Computational Methods for Differential Equations http://cmde.tabrizu.ac.ir Vol. 4, No. 2, 2016, pp. 139-150

Numerical solution of linear control systems using interpolation scaling functions

Behzad Nemati Saray*
Young Researchers and Elite Club, Marand Branch, Islamic Azad University, Marand, Iran.
E-mail: [email protected]

Mohammad Shahriari
Department of Mathematics, Faculty of Science, University of Maragheh, Maragheh, Iran.
E-mail: [email protected]

Abstract

The current paper proposes a technique for the numerical solution of linear control systems. The method is based on the Galerkin method with interpolating scaling functions as basis functions. To obtain a highly accurate connection between functions and their derivatives, an operational matrix of derivatives is constructed and used to reduce the problem to a set of algebraic equations. Several test problems are given, and the numerical results are reported to show the accuracy and efficiency of the method.

Keywords. Linear control systems, Galerkin method, Interpolating scaling functions, Operational matrix.
2010 Mathematics Subject Classification. 37L65, 49N05, 65T60.

1. Introduction

The main purpose of this manuscript is to study a numerical method based on interpolating scaling functions for the solution of the following linear optimal control problem (OCP):

\dot{x}(t) = A x(t) + B u(t), \qquad x(t_0) = x_0,
J = \frac{1}{2} x(t_f)^T S x(t_f) + \frac{1}{2} \int_{t_0}^{t_f} \left( x^T P x + 2 x^T Q u + u^T R u \right) dt,   (1.1)

where x ∈ R^n, u ∈ R^m, A ∈ R^{n×n}, and B ∈ R^{n×m}. The control u(t) is admissible if it is piecewise continuous in t for t ∈ [t_0, t_f] and takes its values in a given closed subset U of R^+. The input u(t) is obtained by minimizing the quadratic performance index J, where S ∈ R^{n×n}, P ∈ R^{n×n}, and Q ∈ R^{n×m} are positive semi-definite matrices and R ∈ R^{m×m} is a positive definite matrix. Optimal control problems are encountered in various fields such as engineering, economics, aerospace, chemical engineering, robotics, and finance. It is generally difficult to solve optimal control problems analytically; thus, numerical methods are the key to solving many of these real-world problems.

Received: 29 August 2016; Accepted: 12 December 2016.
* Corresponding author.


In order to solve linear quadratic OCPs, various numerical approaches have been proposed. Yousefi et al. [23] presented He's variational iteration method for the linear optimal control problem; see also [7] for the use of the Adomian decomposition method for this problem. In [5], the homotopy perturbation method was applied to solve optimal control problems. Some other methods first transform the problem, for example into a differential inclusion form [11] or into a measure space formulation [4], or employ genetic algorithms, while others deal with the optimal control problem directly; see, for example, [6, 8, 9, 10, 16, 20, 22] and the references therein.

Over the last decade, wavelets have found applications in numerous areas of mathematics, engineering, computer science, statistics, and physics [19]. Multiwavelets have been shown to possess several advantages over scalar wavelets. The reason for their success is that, unlike scalar wavelets, multiwavelets can be constructed with several simultaneous properties, such as orthogonality, symmetry, a high number of vanishing moments, and closed form [17, 18]. In this work, we use the interpolating scaling functions introduced by Alpert [1, 2]. In addition to the properties listed above for multiwavelets, interpolating scaling functions have the interpolation property, which reduces the time and computational cost. Moreover, the operational matrix of the derivative derived in [3, 14] helps to save computing time. Therefore, we use the interpolating scaling functions to solve Eq. (1.1).

The outline of this paper is as follows. In Section 2, we describe the basic formulation of the interpolating scaling functions required for our subsequent development. In Section 3, the proposed method is used to approximate the solution of the problem; as a result, a set of algebraic equations is formed and a solution of the considered problem is obtained. In Section 4, we report our numerical findings and demonstrate the accuracy of the proposed numerical scheme.

2. The interpolating scaling functions

Let P_r be the Legendre polynomial of degree r, where r is a fixed nonnegative integer, and let \tau_k, k = 0, ..., r-1, denote the roots of P_r. The interpolating scaling functions (ISF) are given by [14]

\phi_k(t) := \begin{cases} \sqrt{\frac{2}{\omega_k}}\, L_k(2t-1), & t \in [0,1], \\ 0, & \text{otherwise}, \end{cases}

where \omega_k are the Gauss-Legendre quadrature weights,

\omega_k = \frac{2}{r P'_r(\tau_k) P_{r-1}(\tau_k)},

and L_k(t) are the Lagrange interpolating polynomials [15]

L_k(t) = \prod_{i=0,\, i \neq k}^{r-1} \frac{t - \tau_i}{\tau_k - \tau_i}, \qquad k = 0, ..., r-1.


Any polynomial g of degree less than r on [0, 1) can be expanded in terms of the functions \phi_0, ..., \phi_{r-1} as

g(t) = \sum_{k=0}^{r-1} d_k \phi_k(t),

where the coefficients d_k are given by

d_k = \sqrt{\frac{\omega_k}{2}}\, g(\hat{\tau}_k), \qquad k = 0, ..., r-1, \qquad \hat{\tau}_k = \frac{\tau_k + 1}{2}.

Let

\phi_{nl}^k(t) := 2^{n/2} \phi_k(2^n t - l), \qquad k = 0, ..., r-1, \quad l = 0, ..., 2^n - 1,   (2.1)

where n is a fixed nonnegative integer. These functions satisfy the orthonormality conditions [14]

\int_0^1 \phi_{nl}^k(t)\, \phi_{n l'}^{k'}(t)\, dt = \delta_{l l'}\, \delta_{k k'}, \qquad k, k' = 0, ..., r-1, \quad l, l' = 0, ..., 2^n - 1.

2.1. The function approximation. For any two fixed nonnegative integers r and n, a function f(t) ∈ L^2[0,1] is represented by the ISF expansion

f(t) \approx \sum_{k=0}^{r-1} \sum_{l=0}^{2^n-1} s_{nl}^k\, \phi_{nl}^k(t) = S^T \Phi(t),   (2.2)

where

S = [s_{n0}^0, ..., s_{n0}^{r-1} \,|\, s_{n1}^0, ..., s_{n1}^{r-1} \,|\, ... \,|\, s_{n,2^n-1}^0, ..., s_{n,2^n-1}^{r-1}]^T,   (2.3)
\Phi(t) = [\phi_{n0}^0(t), ..., \phi_{n0}^{r-1}(t) \,|\, \phi_{n1}^0(t), ..., \phi_{n1}^{r-1}(t) \,|\, ... \,|\, \phi_{n,2^n-1}^0(t), ..., \phi_{n,2^n-1}^{r-1}(t)]^T,

and the coefficients s_{nl}^k are computed as

s_{nl}^k = \int_0^1 f(t)\, \phi_{nl}^k(t)\, dt = \int_{h_l}^{h_{l+1}} f(t)\, \phi_{nl}^k(t)\, dt,

where h_l = l/2^n, l = 0, ..., 2^n - 1. These coefficients are evaluated using Gauss-Legendre quadrature as

s_{nl}^k = 2^{-n/2} \sqrt{\frac{\omega_k}{2}}\, f\!\left(2^{-n}(\hat{\tau}_k + l)\right), \qquad k = 0, ..., r-1, \quad l = 0, ..., 2^n - 1.   (2.4)
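As a sketch of how (2.2)-(2.4) are used in practice, the following snippet (again our own NumPy illustration with hypothetical helper names) computes the coefficients of a smooth test function by the quadrature formula (2.4) and measures the error of the truncated expansion at off-grid points:

```python
# Sketch: coefficients of Eq. (2.4) and pointwise error of the expansion (2.2).
import numpy as np
from numpy.polynomial.legendre import leggauss

def isf_coeffs(f, r, n):
    """Coefficient vector S of (2.3)-(2.4): S[r*l + k] = 2^{-n/2} sqrt(w_k/2) f(2^{-n}(that_k + l))."""
    tau, w = leggauss(r)
    that = (tau + 1.0) / 2.0
    S = np.zeros(r * 2 ** n)
    for l in range(2 ** n):
        for k in range(r):
            S[r * l + k] = 2.0 ** (-n / 2.0) * np.sqrt(w[k] / 2.0) * f((that[k] + l) / 2 ** n)
    return S

def isf_eval(S, r, n, t):
    """Evaluate f(t) ~ S^T Phi(t) of Eq. (2.2) at points t in [0, 1)."""
    tau, w = leggauss(r)
    t = np.atleast_1d(np.asarray(t, float))
    l = np.minimum((t * 2 ** n).astype(int), 2 ** n - 1)   # subinterval containing each t
    s = 2.0 ** n * t - l                                    # local coordinate in [0, 1)
    out = np.zeros_like(t)
    for k in range(r):
        others = np.delete(tau, k)
        Lk = np.prod(((2 * s - 1)[:, None] - others) / (tau[k] - others), axis=1)
        out += S[r * l + k] * 2.0 ** (n / 2.0) * np.sqrt(2.0 / w[k]) * Lk
    return out

f = np.exp
r, n = 5, 3
S = isf_coeffs(f, r, n)
t = np.linspace(0.0, 0.999, 400)
print(np.max(np.abs(isf_eval(S, r, n, t) - f(t))))   # small; decreases rapidly as r or n grows
```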

A function g(x,t) ∈ L^2([0,1] × [0,1]) is likewise represented by the ISF expansion

g(x,t) \approx \sum_{i=1}^{N} \sum_{j=1}^{N} g_{ij}\, \Phi_i(x)\, \Phi_j(t) = \Phi^T(x)\, G\, \Phi(t),   (2.5)

where N = r 2^n and G is the N × N matrix

G = \begin{pmatrix} g_{11} & \cdots & g_{1N} \\ \vdots & & \vdots \\ g_{N1} & \cdots & g_{NN} \end{pmatrix},

with entries

g_{ij} = \int_0^1 \int_0^1 g(x,t)\, \Phi_i(x)\, \Phi_j(t)\, dt\, dx, \qquad i, j = 1, 2, ..., N.

Applying the method of [14], we get

g_{ij} = 2^{-n} \sqrt{\frac{\omega_k}{2}} \sqrt{\frac{\omega_{k'}}{2}}\; g\!\left(2^{-n}(\hat{\tau}_k + l),\; 2^{-n}(\hat{\tau}_{k'} + l')\right),   (2.6)

where l = (i - k - 1)/r, l' = (j - k' - 1)/r, and k, k' = 0, 1, ..., r-1 (equivalently, i = r l + k + 1 and j = r l' + k' + 1).
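The flat index in (2.6) is simply i = r l + k + 1. One easy consistency check, sketched below under the assumption that NumPy is available (the helper names are hypothetical), uses a separable function g(x, t) = f(x) h(t), for which the matrix G of (2.5) reduces to the outer product of the one-dimensional coefficient vectors:

```python
# Sketch: the index convention of Eq. (2.6), checked with a separable g(x,t) = f(x) h(t).
import numpy as np
from numpy.polynomial.legendre import leggauss

def coeffs_1d(f, r, n):
    tau, w = leggauss(r)
    that = (tau + 1.0) / 2.0
    return np.array([2.0 ** (-n / 2.0) * np.sqrt(w[k] / 2.0) * f((that[k] + l) / 2 ** n)
                     for l in range(2 ** n) for k in range(r)])

def coeffs_2d(g, r, n):
    """G[i-1, j-1] from Eq. (2.6), with i = r*l + k + 1 and j = r*l' + k' + 1."""
    tau, w = leggauss(r)
    that = (tau + 1.0) / 2.0
    N = r * 2 ** n
    G = np.zeros((N, N))
    for l in range(2 ** n):
        for k in range(r):
            for lp in range(2 ** n):
                for kp in range(r):
                    G[r * l + k, r * lp + kp] = (2.0 ** (-n) * np.sqrt(w[k] / 2.0)
                                                 * np.sqrt(w[kp] / 2.0)
                                                 * g((that[k] + l) / 2 ** n, (that[kp] + lp) / 2 ** n))
    return G

r, n = 3, 2
f, h = np.sin, np.cos
G = coeffs_2d(lambda x, t: f(x) * h(t), r, n)
print(np.max(np.abs(G - np.outer(coeffs_1d(f, r, n), coeffs_1d(h, r, n)))))   # ~0
```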

2.2. The operational matrix of the derivative. Suppose that the derivative of f(t) in (2.2) is given by

\frac{d}{dt} f(t) = \sum_{k=0}^{r-1} \sum_{l=0}^{2^n-1} \tilde{s}_{nl}^k\, \phi_{nl}^k(t) = \tilde{S}^T \Phi(t),   (2.7)

where \tilde{S} is a vector defined similarly to (2.3). The relation between S and \tilde{S} is

\tilde{S} = D S,   (2.8)

where D is the operational matrix of the derivative [12] for the ISFs. Using Eq. (2.7), we get

\tilde{s}_{nl}^k = \int_{h_l}^{h_{l+1}} \phi_{nl}^k(t)\, \frac{d}{dt} f(t)\, dt, \qquad k = 0, ..., r-1, \quad l = 0, ..., 2^n - 1.

Integrating by parts,

\tilde{s}_{nl}^k = \left[ f(t)\, \phi_{nl}^k(t) \right]_{h_l}^{h_{l+1}} - \int_{h_l}^{h_{l+1}} f(t)\, \frac{d}{dt} \phi_{nl}^k(t)\, dt.

Using (2.1) and (2.2), we get

\tilde{s}_{nl}^k = 2^{n/2} \left( f(h_{l+1})\, \phi_k(1) - f(h_l)\, \phi_k(0) \right) - 2^n \sum_{i=0}^{r-1} q_{ki}\, s_{nl}^i,   (2.9)

where

q_{ki} = \int_0^1 \phi_i(t)\, \frac{d}{dt} \phi_k(t)\, dt, \qquad k, i = 0, 1, ..., r-1.

Employing the Gauss-Legendre quadrature formula, we obtain

q_{ki} = \sqrt{\frac{\omega_i}{2}}\, \frac{d\phi_k}{dt}(\hat{\tau}_i), \qquad k, i = 0, 1, ..., r-1.


To evaluate f(h_l) for l = 1, ..., 2^n - 1, we use the average of the left and right limits at h_l:

f(h_l) = \frac{1}{2} \left( \sum_{i=0}^{r-1} s_{n,l-1}^i\, \phi_{n,l-1}^i(h_l) + \sum_{i=0}^{r-1} s_{nl}^i\, \phi_{nl}^i(h_l) \right), \qquad l = 1, ..., 2^n - 1.   (2.10)

Using (2.1), Eq. (2.10) can be written as

f(h_l) = \frac{1}{2}\, 2^{n/2} \left( \sum_{i=0}^{r-1} s_{n,l-1}^i\, \phi_i(1) + \sum_{i=0}^{r-1} s_{nl}^i\, \phi_i(0) \right), \qquad l = 1, ..., 2^n - 1.   (2.11)

To evaluate f at the points h_0 and h_{2^n} we have

f(h_0) = \sum_{i=0}^{r-1} s_{n0}^i\, \phi_{n0}^i(h_0) = 2^{n/2} \sum_{i=0}^{r-1} s_{n0}^i\, \phi_i(0),   (2.12)

f(h_{2^n}) = \sum_{i=0}^{r-1} s_{n,2^n-1}^i\, \phi_{n,2^n-1}^i(h_{2^n}) = 2^{n/2} \sum_{i=0}^{r-1} s_{n,2^n-1}^i\, \phi_i(1).   (2.13)

Substituting (2.11)-(2.13) into Eq. (2.9), we obtain

\tilde{s}_{n0}^k = 2^n \left[ \sum_{i=0}^{r-1} \left( \frac{1}{2} \phi_i(1)\phi_k(1) - \phi_i(0)\phi_k(0) - q_{ki} \right) s_{n0}^i + \sum_{i=0}^{r-1} \frac{1}{2} \phi_i(0)\phi_k(1)\, s_{n1}^i \right],

\tilde{s}_{nl}^k = 2^n \left[ \sum_{i=0}^{r-1} \left( -\frac{1}{2} \phi_i(1)\phi_k(0) \right) s_{n,l-1}^i + \sum_{i=0}^{r-1} \left( \frac{1}{2} \phi_i(1)\phi_k(1) - \frac{1}{2} \phi_i(0)\phi_k(0) - q_{ki} \right) s_{nl}^i + \sum_{i=0}^{r-1} \frac{1}{2} \phi_i(0)\phi_k(1)\, s_{n,l+1}^i \right], \qquad l = 1, ..., 2^n - 2,

\tilde{s}_{n,2^n-1}^k = 2^n \left[ \sum_{i=0}^{r-1} \left( -\frac{1}{2} \phi_i(1)\phi_k(0) \right) s_{n,2^n-2}^i + \sum_{i=0}^{r-1} \left( \phi_i(1)\phi_k(1) - \frac{1}{2} \phi_i(0)\phi_k(0) - q_{ki} \right) s_{n,2^n-1}^i \right].

From the above equations, the matrix D can be expressed as the block tridiagonal matrix

D = 2^n \begin{pmatrix} \underline{R} & H & & & \\ -H^T & R & H & & \\ & \ddots & \ddots & \ddots & \\ & & -H^T & R & H \\ & & & -H^T & \overline{R} \end{pmatrix},


where each block is an r × r matrix whose entries, for k, i = 0, 1, ..., r-1, are

[\underline{R}]_{ki} = \frac{1}{2} \phi_i(1)\phi_k(1) - \phi_i(0)\phi_k(0) - q_{ki},
[R]_{ki} = \frac{1}{2} \phi_i(1)\phi_k(1) - \frac{1}{2} \phi_i(0)\phi_k(0) - q_{ki},
[\overline{R}]_{ki} = \phi_i(1)\phi_k(1) - \frac{1}{2} \phi_i(0)\phi_k(0) - q_{ki},
[H]_{ki} = \frac{1}{2} \phi_i(0)\phi_k(1).

Since Eqs. (2.9)-(2.13) are exact for polynomials of degree up to r - 1, the operational matrix of the derivative is exact for polynomials of degree up to r - 1.
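The block structure above translates directly into code. The following sketch (our own NumPy illustration; isf_D and isf_coeffs are hypothetical helper names) assembles D for n >= 1 and checks the exactness statement by applying D to the coefficient vector of a polynomial of degree r - 1 and comparing with the coefficient vector of its derivative:

```python
# Sketch: assemble the block tridiagonal D of Section 2.2 and test its exactness.
import numpy as np
from numpy.polynomial.legendre import leggauss

def isf_D(r, n):
    """Operational matrix D (assumes n >= 1, i.e. at least two subintervals)."""
    tau, w = leggauss(r)
    # Lagrange polynomials L_k and their derivatives on the nodes tau
    L = [np.poly1d(np.poly(np.delete(tau, k)) / np.prod(tau[k] - np.delete(tau, k)))
         for k in range(r)]
    c = np.sqrt(2.0 / w)                                       # phi_k(t) = c_k L_k(2t-1)
    phi0 = c * np.array([L[k](-1.0) for k in range(r)])        # phi_k(0)
    phi1 = c * np.array([L[k](1.0) for k in range(r)])         # phi_k(1)
    # q_{ki} = sqrt(w_i/2) phi_k'(that_i),  with phi_k'(that_i) = 2 c_k L_k'(tau_i)
    q = np.array([[np.sqrt(w[i] / 2.0) * 2.0 * c[k] * L[k].deriv()(tau[i]) for i in range(r)]
                  for k in range(r)])
    H = 0.5 * np.outer(phi1, phi0)                             # [H]_{ki} = phi_i(0) phi_k(1) / 2
    R_first = 0.5 * np.outer(phi1, phi1) - np.outer(phi0, phi0) - q
    R_mid   = 0.5 * np.outer(phi1, phi1) - 0.5 * np.outer(phi0, phi0) - q
    R_last  = np.outer(phi1, phi1) - 0.5 * np.outer(phi0, phi0) - q
    m, N = 2 ** n, r * 2 ** n
    D = np.zeros((N, N))
    for l in range(m):
        D[r*l:r*l+r, r*l:r*l+r] = R_first if l == 0 else (R_last if l == m - 1 else R_mid)
        if l + 1 < m:
            D[r*l:r*l+r, r*(l+1):r*(l+1)+r] = H
            D[r*(l+1):r*(l+1)+r, r*l:r*l+r] = -H.T
    return 2 ** n * D

def isf_coeffs(f, r, n):
    """Coefficient vector of Eq. (2.4)."""
    tau, w = leggauss(r)
    that = (tau + 1.0) / 2.0
    return np.array([2.0 ** (-n / 2.0) * np.sqrt(w[k] / 2.0) * f((that[k] + l) / 2 ** n)
                     for l in range(2 ** n) for k in range(r)])

r, n = 4, 2
f  = lambda t: 3 * t ** 3 - 2 * t + 1      # degree r - 1
df = lambda t: 9 * t ** 2 - 2
err = np.max(np.abs(isf_D(r, n) @ isf_coeffs(f, r, n) - isf_coeffs(df, r, n)))
print(err)                                  # rounding-level, illustrating the exactness claim
```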

3. Description of the numerical method

Let us now solve the linear optimal control problem (OCP) (1.1). For this purpose, we apply Pontryagin's maximum principle (PMP) to system (1.1) and obtain the optimal control law u^*(t) = -k(t)x(t). According to the PMP, one has

\dot{\lambda} = -\frac{\partial H}{\partial x} = -P x - Q u - A^T \lambda,   (3.1)
\frac{\partial H}{\partial u} = Q^T x + R u + B^T \lambda = 0,   (3.2)

where H is the Hamiltonian of system (1.1),

H(x, u, \lambda, t) = \frac{1}{2} \left( x^T P x + 2 x^T Q u + u^T R u \right) + \lambda^T (A x + B u),   (3.3)

and \lambda ∈ R^n is the co-state vector. The optimal control is given by

u^* = -R^{-1} Q^T x - R^{-1} B^T \lambda,   (3.4)

where \lambda and x are the solution of the Hamiltonian system

\dot{x} = [A - B R^{-1} Q^T]\, x - B R^{-1} B^T \lambda,
\dot{\lambda} = [-P + Q R^{-1} Q^T]\, x + [Q R^{-1} B^T - A^T]\, \lambda,   (3.5)

with the initial condition x(t_0) = x_0 and the terminal condition \lambda(t_f) = S x(t_f) [13]. This system of equations is linear, so we may write

\begin{pmatrix} x \\ \lambda \end{pmatrix} = \begin{pmatrix} F(t, t_f) & G(t, t_f) \\ L(t, t_f) & M(t, t_f) \end{pmatrix} \begin{pmatrix} x(t_f) \\ \lambda(t_f) \end{pmatrix},   (3.6)

where F, G, L, and M are n × n matrices. Applying the terminal condition \lambda(t_f) = S x(t_f) gives

x(t) = (F + G S)\, x(t_f), \qquad \lambda(t) = (L + M S)\, x(t_f).   (3.7)

If F + G S is invertible, we get

\lambda(t) = \underbrace{(L + M S)(F + G S)^{-1}}_{Y(t, t_f)}\, x(t).   (3.8)


Differentiating Eq. (3.8) with respect to t, we have

\dot{\lambda}(t) = \dot{Y}(t, t_f)\, x(t) + Y(t, t_f)\, \dot{x}(t).   (3.9)

Using equations (3.5) and (3.8), one can write [13]

\dot{Y} = (Y B + Q) R^{-1} (B^T Y + Q^T) - Y A - A^T Y - P,   (3.10)

and, applying the optimal control law (3.4),

u^*(t) = -R^{-1} Q^T x(t) - R^{-1} B^T Y(t, t_f)\, x(t).   (3.11)

According to equation (3.8), the terminal conditions for equation (3.10) are obtained from

F(t_f, t_f) = I, \quad G(t_f, t_f) = 0, \quad L(t_f, t_f) = 0, \quad M(t_f, t_f) = I.   (3.12)

Introducing the variables

V(t) = F(t, t_f) + G(t, t_f)\, S, \qquad W(t) = L(t, t_f) + M(t, t_f)\, S,   (3.13)

and substituting them into (3.7) and then into (3.5), one has

\dot{V}(t) = [A - B R^{-1} Q^T]\, V(t) - B R^{-1} B^T\, W(t),
\dot{W}(t) = [-P + Q R^{-1} Q^T]\, V(t) + [Q R^{-1} B^T - A^T]\, W(t),   (3.14)

with the conditions V(t_f) = I and W(t_f) = S. Assume that V(t) and W(t) are expanded in interpolating scaling functions as

V(t) \approx \mathcal{V} \Phi(t), \qquad W(t) \approx \mathcal{W} \Phi(t),   (3.15)

where \mathcal{V} and \mathcal{W} are (n × N) matrices of unknown coefficients. Differentiating both sides of Eq. (3.15) and using Eq. (2.8), we get

\dot{V}(t) \approx \mathcal{V} D \Phi(t), \qquad \dot{W}(t) \approx \mathcal{W} D \Phi(t).   (3.16)

Substituting (3.15) and (3.16) into (3.14) yields

\left( \mathcal{V} D - [A - B R^{-1} Q^T]\, \mathcal{V} + B R^{-1} B^T\, \mathcal{W} \right) \Phi(t) = 0,
\left( \mathcal{W} D - [-P + Q R^{-1} Q^T]\, \mathcal{V} - [Q R^{-1} B^T - A^T]\, \mathcal{W} \right) \Phi(t) = 0.   (3.17)

The entries of the vector \Phi(t) are linearly independent, so we obtain

\mathcal{V} D - [A - B R^{-1} Q^T]\, \mathcal{V} + B R^{-1} B^T\, \mathcal{W} = 0,
\mathcal{W} D - [-P + Q R^{-1} Q^T]\, \mathcal{V} - [Q R^{-1} B^T - A^T]\, \mathcal{W} = 0.   (3.18)

Eq. (3.18), together with the conditions V(t_f) = I and W(t_f) = S, gives a system of linear algebraic equations for the 2nN unknown entries of \mathcal{V} and \mathcal{W}. Solving this system yields \mathcal{V} and \mathcal{W}, and the functions x(t) and u(t) may then be recovered using Eq. (3.15) together with (3.7) and (3.11).
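To illustrate how the discretized equations are used, the sketch below treats the scalar Hamiltonian boundary value problem that reappears as Example 4.1 in Section 4 (\dot{x} = -x - \lambda, \dot{\lambda} = -x + \lambda, x(0) = 1, \lambda(1) = 0). It is our own illustration and makes two assumptions worth flagging: coefficients are stored as column vectors, so differentiation acts as D times the coefficient vector as in Eq. (2.8), and since the paper does not spell out how the side conditions enter the discrete system, one Galerkin row in each block is simply replaced by the corresponding condition. The operational-matrix helper repeats the construction from the Section 2.2 sketch so that the snippet runs on its own.

```python
# Sketch: Galerkin/operational-matrix solution of the Hamiltonian system of Example 4.1.
import numpy as np
from numpy.polynomial.legendre import leggauss

def isf_D(r, n):
    """Operational matrix D of Section 2.2 (n >= 1); also returns phi_k(0), phi_k(1)."""
    tau, w = leggauss(r)
    L = [np.poly1d(np.poly(np.delete(tau, k)) / np.prod(tau[k] - np.delete(tau, k)))
         for k in range(r)]
    c = np.sqrt(2.0 / w)
    phi0 = c * np.array([L[k](-1.0) for k in range(r)])
    phi1 = c * np.array([L[k](1.0) for k in range(r)])
    q = np.array([[np.sqrt(w[i] / 2.0) * 2.0 * c[k] * L[k].deriv()(tau[i]) for i in range(r)]
                  for k in range(r)])
    H = 0.5 * np.outer(phi1, phi0)
    R_first = 0.5 * np.outer(phi1, phi1) - np.outer(phi0, phi0) - q
    R_mid   = 0.5 * np.outer(phi1, phi1) - 0.5 * np.outer(phi0, phi0) - q
    R_last  = np.outer(phi1, phi1) - 0.5 * np.outer(phi0, phi0) - q
    m, N = 2 ** n, r * 2 ** n
    D = np.zeros((N, N))
    for l in range(m):
        D[r*l:r*l+r, r*l:r*l+r] = R_first if l == 0 else (R_last if l == m - 1 else R_mid)
        if l + 1 < m:
            D[r*l:r*l+r, r*(l+1):r*(l+1)+r] = H
            D[r*(l+1):r*(l+1)+r, r*l:r*l+r] = -H.T
    return 2 ** n * D, phi0, phi1

r, n = 5, 2
m, N = 2 ** n, r * 2 ** n
D, phi0, phi1 = isf_D(r, n)

# Galerkin equations for x' = -x - lam and lam' = -x + lam with coefficient columns X, Lam.
I = np.eye(N)
M = np.block([[D + I, I], [I, D - I]])
rhs = np.zeros(2 * N)

# Basis vector Phi(t) at the endpoints: only the first / last block is nonzero there.
Phi_at_0 = np.zeros(N); Phi_at_0[:r] = 2.0 ** (n / 2.0) * phi0
Phi_at_1 = np.zeros(N); Phi_at_1[N - r:] = 2.0 ** (n / 2.0) * phi1

# Impose x(0) = 1 and lam(1) = 0 by replacing one Galerkin row per block (our choice).
M[0, :] = np.concatenate([Phi_at_0, np.zeros(N)]); rhs[0] = 1.0
M[2 * N - 1, :] = np.concatenate([np.zeros(N), Phi_at_1]); rhs[2 * N - 1] = 0.0
X, Lam = np.split(np.linalg.solve(M, rhs), 2)

# Compare x with the exact solution quoted in Section 4, using the interpolation
# property x(t_{kl}) = 2^{n/2} sqrt(2/w_k) X[r*l + k] at the quadrature nodes t_{kl}.
tau, w = leggauss(r)
tnod = np.array([((tau[k] + 1.0) / 2.0 + l) / m for l in range(m) for k in range(r)])
scale = np.array([2.0 ** (n / 2.0) * np.sqrt(2.0 / w[k]) for l in range(m) for k in range(r)])
s2 = np.sqrt(2.0)
beta = -(np.cosh(s2) + s2 * np.sinh(s2)) / (s2 * np.cosh(s2) + np.sinh(s2))
print(np.max(np.abs(scale * X - (np.cosh(s2 * tnod) + beta * np.sinh(s2 * tnod)))))
```

Increasing r or n refines the approximation, in the same spirit as the N = 6 and N = 8 runs reported in Section 4.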


4. Numerical Experiments

In this section, some numerical examples are presented to illustrate the validity and the merits of the new technique. We report the L_\infty error of the solution, defined as

L_\infty = \max_{0 \le i \le 10} |u_i - \tilde{u}_i|,

where u_i and \tilde{u}_i are the exact and computed values of the solution u at the points t_i = i/10, i = 0, 1, ..., 10, respectively.

Example 4.1. Consider the single-input scalar system

\dot{x}(t) = -x(t) + u(t),
J = \frac{1}{2} \int_0^1 \left( x^2(t) + u^2(t) \right) dt,   (4.1)

with the initial condition

x(0) = 1.   (4.2)

The exact solution of this problem is [13]

x(t) = \cosh(\sqrt{2}\, t) + \beta \sinh(\sqrt{2}\, t),
u(t) = (1 + \sqrt{2}\, \beta) \cosh(\sqrt{2}\, t) + (\sqrt{2} + \beta) \sinh(\sqrt{2}\, t),

where

\beta = -\frac{\cosh\sqrt{2} + \sqrt{2}\, \sinh\sqrt{2}}{\sqrt{2}\, \cosh\sqrt{2} + \sinh\sqrt{2}}.

Applying the method proposed in the previous section, we obtain

\dot{V}(t) = -V(t) - W(t),
\dot{W}(t) = -V(t) + W(t),
V(0) = 1, \quad W(1) = 0,

and the optimal control law is computed from

u^*(t) = -W(t).

Tables 1 and 2 report the L_\infty errors for Example 4.1 with N = 6 and N = 8. Figure 1 shows the approximate solutions of Eq. (4.1), and Figure 2 shows the corresponding absolute errors.

Table 1. L_\infty error of x(t) for various values of N for Example 4.1.

    t       0         0.2       0.4       0.6       0.8       1
    N = 6   0         4.68e-6   3.62e-6   3.05e-6   2.77e-6   2.32e-6
    N = 8   0         1.78e-8   1.44e-8   1.22e-8   1.09e-8   9.60e-9


Table 2. L_\infty error of λ(t) for various values of N for Example 4.1.

    t       0         0.2       0.4       0.6       0.8       1
    N = 6   1.39e-6   6.80e-7   4.27e-8   7.91e-7   1.68e-6   0
    N = 8   5.46e-9   2.58e-9   4.13e-10  1.88e-9   5.0e-9    0

Figure 1. Plot of the approximate solutions for Example 4.1 (left: x(t); right: λ(t)).

Figure 2. Plot of the absolute errors for Example 4.1 (left: x(t); right: λ(t)).

Example 4.2. Consider the single-input scalar system

\dot{x}(t) = u(t),
J = \int_0^1 \left( x^2(t) + u^2(t) \right) dt,   (4.3)


with the initial condition

x(0) = 0.   (4.4)

For this example, the analytical solution is [21]

x(t) = \frac{e\,(e^{t} - e^{-t})}{2(e^{2} - 1)}, \qquad u(t) = \frac{e\,(e^{t} + e^{-t})}{2(e^{2} - 1)}.

Therefore, we should have

\dot{V}(t) = -\frac{1}{2} W(t),
\dot{W}(t) = -2 V(t),
V(0) = 0, \quad W(1) = 0,

and the optimal control law is

u^*(t) = -\frac{1}{2} W(t).

Tables 3 and 4 report the L_\infty errors for Example 4.2 with N = 6 and N = 8. Figure 3 shows the approximate solutions of Eq. (4.3), and Figure 4 shows the corresponding absolute errors.

Table 3. L_\infty error of x(t) for various values of N for Example 4.2.

    t       0         0.2       0.4       0.6       0.8       1
    N = 6   0         4.12e-7   3.08e-7   2.33e-7   1.72e-7   1.01e-7
    N = 8   0         7.96e-10  6.25e-10  4.78e-10  3.50e-10  2.24e-10

Table 4. L_\infty error of λ(t) for various values of N for Example 4.2.

    t       0         0.2       0.4       0.6       0.8       1
    N = 6   5.37e-7   4.78e-7   4.05e-7   3.50e-7   3.19e-6   0
    N = 8   1.07e-9   9.29e-10  7.88e-10  6.79e-10  5.97e-10  0

5. Conclusion

In this paper, we presented a numerical scheme for solving optimal control problems. The technique is based on interpolating scaling functions and the Galerkin method. The method was tested on several examples taken from the literature to assess the efficiency of the new technique, and the numerical results given in the previous section demonstrate the accuracy of the scheme. Because of the interpolating property of the scaling functions, the resulting system of algebraic equations is solved rapidly. We used Maple to solve this system of equations.


Figure 3. Plot of the approximate solutions for Example 4.2 (right: x(t); left: λ(t)).

Figure 4. Plot of the absolute errors for Example 4.2 (right: x(t); left: λ(t)).

Acknowledgment

This work was supported by the Young Researchers and Elite Club, Marand Branch, Islamic Azad University, Marand, Iran [grant number 94171]. We would like to thank one of the referees for his comments.

References

[1] B. Alpert, G. Beylkin, D. Gines, and L. Vozovoi, Adaptive solution of partial differential equations in multiwavelet bases, J. Comput. Phys., 182 (2002), 149-190.
[2] B. Alpert, G. Beylkin, R. R. Coifman, and V. Rokhlin, Wavelet-like bases for the fast solution of second-kind integral equations, SIAM J. Sci. Statist. Comput., 14 (1993), 159-184.
[3] M. Dehghan, B. N. Saray, and M. Lakestani, Mixed finite difference and Galerkin methods for solving Burgers equations using interpolating scaling functions, Math. Meth. Appl. Sci., 37 (2014), 894-912.


[4] S. Effati, M. Janfada, and M. Esmaeili, Solving the optimal control problem of the parabolic PDEs in exploitation of oil, Journal of Mathematical Analysis and Applications, 340(1) (2008), 606-620.
[5] S. Effati and H. Saberi Nik, Solving a class of linear and nonlinear optimal control problems by homotopy perturbation method, IMA J. Math. Control Inform., 28(4) (2011), 539-553.
[6] G. N. Elnagar, State-control spectral Chebyshev parameterization for linearly constrained quadratic optimal control problems, Journal of Computational and Applied Mathematics, 79(1) (1997), 19-40.
[7] A. Fakharian, M. T. Hamidi Beheshti, and A. Davari, Solving the Hamilton-Jacobi-Bellman equation using Adomian decomposition method, Int. J. Comput. Math., 87 (2010), 2769-2785.
[8] O. S. Fard and H. A. Borzabadi, Optimal control problem, quasi-assignment problem and genetic algorithm, Proceedings of World Academy of Science, Engineering and Technology, 21 (2007), 70-43.
[9] H. Hashemi Mehne and A. Hashemi Borzabadi, A numerical method for solving optimal control problem using state parametrization, Numerical Algorithms, 42(2) (2006), 165-169.
[10] H. P. Hua, Numerical solution of optimal control problems, Optimal Control Applications and Methods, 21(5) (2000), 233-241.
[11] A. V. Kamyad, M. Keyanpour, and M. H. Farahi, A new approach for solving of optimal nonlinear control problems, Applied Mathematics and Computation, 187(2) (2007), 1461-1471.
[12] M. Lakestani and M. Dehghan, Numerical solution of Riccati equation using the cubic B-spline scaling functions and Chebyshev cardinal functions, Computer Physics Communications, 181 (2010), 957-966.
[13] H. Saberi Nik, S. Effati, and A. Yildirim, Solution of linear optimal control systems by differential transform method, Neural Comput. and Applic., 23 (2013), 1311-1317.
[14] M. Shamsi and M. Razzaghi, Numerical solution of the controlled Duffing oscillator by the interpolating scaling functions, J. Electromagn. Waves and Appl., 18(5) (2004), 691-705.
[15] M. Shamsi and M. Razzaghi, Solution of Hallen's integral equation using multiwavelets, Comput. Phys. Comm., 168 (2005), 187-197.
[16] H. R. Sirsena and K. S. Tan, Computation of constrained optimal controls using parameterization techniques, IEEE Transactions on Automatic Control, 19(4) (1974), 431-433.
[17] G. Strang and V. Strela, Orthogonal multiwavelets with vanishing moments, Opt. Eng., 33(7) (1994), 2104-2107.
[18] V. Strela, P. N. Heller, G. Strang, P. Topiwala, and C. Heil, The application of multiwavelet filterbanks to image processing, IEEE Trans. Image Process., 8(4) (1999), 548-563.
[19] W. Sweldens, The lifting scheme: a construction of second generation wavelets, SIAM J. Math. Anal., 29(2) (1998), 511-546.
[20] K. L. Teo, C. J. Goh, and K. H. Wong, A Unified Computational Approach to Optimal Control Problems, Longman Scientific and Technical, Harlow, 1991.
[21] E. Tohidi and H. Saberi Nik, A Bessel collocation method for solving fractional optimal control problems, Applied Mathematical Modelling, 39 (2015), 455-465.
[22] J. Vlassenbroeck and R. Van Dooren, A Chebyshev technique for solving nonlinear optimal control problems, IEEE Transactions on Automatic Control, 33(4) (1988), 333-340.
[23] S. A. Yousefi, M. Dehghan, and A. Lotfi, Finding the optimal control of linear systems via He's variational iteration method, Int. J. Comput. Math., 87(5) (2010), 1042-1050.
