Additional Notes for BI3SS16
William Harwin

Background

This is part of the module BI3SS that includes state space and frequency methods of system analysis.

The state-space lectures of this module will be loosely based on the PowerPoint slides of Prof Paul Sharkey (available on Blackboard). This file contains additional necessary material. It is important that you attend lectures, as key material will be distributed and discussed only during the lectures. Assignment details will also be discussed during lectures.

This file should also be available at http://www.cybernetia.co.uk/LN/state_space.html and http://www.reading.ac.uk/~shshawin/LN

Additional information is available at http://www.cybernetia.co.uk/dnb/sspace_important.pdf

Assignments

Teaching plan

Week 1

Blackboard Files

Week 2

Blackboard Files

Week 3

Blackboard Files

Week 4

Blackboard Files

Week 5

State-space equations

The general form for a state-space system with states $\vec{x}(t)$, inputs $\vec{u}(t)$ and outputs $\vec{y}(t)$ is

\begin{align*} \dot{\vec{x}}&=f(\vec{x},\vec{u},t)\\ \vec{y}&=g(\vec{x},\vec{u},t) \end{align*} All variables are assumed to be functions of $t$.

Background mathematics

Euler's formula

\[ e^{\pm j Q}=\cos(Q)\pm j\sin(Q) \]

Laplace

The Laplace transform of a derivative is $\mathcal{L}[\dot x]=sX(s)-x(0)$, where $X(s)=\mathcal{L}[x]$.

Solutions to ODEs are of the form $y=\mathrm{CF}+\mathrm{PI}$ where CF is the complementary function and PI is the particular integral. These correspond, essentially, to the two components of the Laplace transform of $\dot{x}$ above: the initial-condition term $x(0)$ gives rise to the complementary function, and the transformed input gives rise to the particular integral.
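Matlab will confirm the derivative rule directly; a minimal sketch, assuming the Symbolic Math Toolbox is installed

>> syms x(t) s
>> laplace(diff(x,t),t,s)   % returns s*laplace(x(t),t,s) - x(0)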

Laplace transform of a matrix exponential

The definition of the Laplace transform is \[ \mathcal{L}[f(t)](s)=\int_0^\infty f(t)e^{-st} dt \] We can extend this to matrices and use it to compute the Laplace transform of $e^{At}$ where $A$ is a matrix. So \[ \mathcal{L}[e^{At}]=\int_0^\infty e^{At}e^{-st} dt \] Since $A$ and $sI$ commute, that is $A\,sI=sI\,A$, we can move the scalar exponential into the matrix exponent so that \[ \mathcal{L}[e^{At}]=\int_0^\infty e^{(A-sI)t} dt \] On integration this becomes \[ \mathcal{L}[e^{At}]=(A-sI)^{-1}e^{(A-sI)t} \Big|_0^\infty \] or, since these matrices commute, \[ \mathcal{L}[e^{At}]=e^{(A-sI)t}(A-sI)^{-1} \Big|_0^\infty \] It is easy to show that $e^{(A-sI)t}=I$ when $t=0$. Provided the real part of $s$ is larger than the real part of every eigenvalue of $A$, we also have $e^{(A-sI)t}\rightarrow 0$ as $t\rightarrow\infty$, so only the lower limit contributes and

\[ \mathcal{L}[e^{At}]=-(A-sI)^{-1}=(sI-A)^{-1} \]
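This identity can be checked symbolically for a concrete case; a minimal sketch, where the matrix $A$ is an arbitrary example and the Symbolic Math Toolbox is assumed

>> syms t s
>> A=[0 1; -2 -3];               % example matrix with eigenvalues -1 and -2 (assumed)
>> lhs=laplace(expm(A*t),t,s);
>> rhs=inv(s*eye(2)-A);
>> simplify(lhs-rhs)             % expect the zero matrix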

Characteristic polynomial

The characteristic polynomial of a square matrix $A$ is defined as

\[ p(\lambda)=\hbox{det}(\lambda I-A)=|\lambda I-A| \]

The Eigenvalues of $A$ are simply the roots of $p(\lambda)$.

Eigenvalues and determinants

If we have a problem of the form $(\lambda I-A)\vec{v}=0$, or $W\vec{v}=0$ where $W=(\lambda I-A)$ is a square matrix and $\vec{v}$ is a column vector, there are two kinds of solution. The first, $\vec{v}=0$, is trivial; the second requires the matrix $W$ to be singular. We can see this by assuming that an inverse to $W$ exists. If so then we could write $\vec{v}=W^{-1}[0]=[0]$, where $[0]$ is a null column vector. Since we require a non-trivial solution for $\vec{v}$, $W$ cannot have an inverse and is therefore singular.

So to compute the non-trivial solutions to the equation $(\lambda I-A)\vec{v}=0$ we must find the values of $\lambda$ for which the determinant $|\lambda I-A|$ is 0.

Since there are $n$ solutions for an $n\times n$ matrix $A$, we can put all the solutions into a handy form by creating a diagonal matrix of the Eigenvalues \[D=\begin{bmatrix}\lambda_1 & 0 & \dots & 0\\ 0 & \lambda_2 & \dots &0 \\ \vdots & \vdots & \ddots& \vdots\\ 0 & 0 & \dots &\lambda_n\end{bmatrix}\] and an Eigenvector matrix where each Eigenvector is a column. That is \[ V=\begin{bmatrix}\vec{v}_1&\vec{v}_2&\dots&\vec{v}_n\end{bmatrix} \]

We can then write the full set of Eigenvalues and Eigenvectors as a matrix decomposition

\[A V = V D\]
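Matlab's eig function returns these two matrices directly, so the decomposition is easy to verify numerically; a minimal sketch, where the matrix $A$ is an arbitrary example

>> A=[0 1; -2 -3];   % example matrix (assumed)
>> [V,D]=eig(A)      % Eigenvectors as columns of V, Eigenvalues on the diagonal of D
>> A*V - V*D         % expect a (numerically) zero matrix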

Matrix exponents

For constant matrices $A$ and $B$, and for scalars $a$, $b$ and $t$ $$\displaystyle \frac{d e^{At}}{dt}=Ae^{At}=e^{At}A$$ and, provided $A$ is invertible, $$\displaystyle \int e^{At}{dt}=A^{-1}e^{At}=e^{At}A^{-1}$$ $$\displaystyle e^{A(a+b)}=e^{Aa}e^{Ab}=e^{Ab}e^{Aa}$$ Only when $AB=BA$ does the following hold $$\displaystyle e^{(A+B)}=e^{A}e^{B}=e^{B}e^{A}$$ For the matrix decomposition $A = V D V^{-1}$ (see Eigenvalues above) the matrix exponent becomes $$\displaystyle e^{A}=Ve^{D}V^{-1}$$ The matrix exponent of a diagonal matrix is simply the diagonal matrix of the exponents of the diagonal entries (this is relatively easy to demonstrate from the power series).
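The decomposition route to the matrix exponent can be compared with Matlab's built-in expm; a minimal sketch, reusing an arbitrary example matrix

>> A=[0 1; -2 -3];          % example matrix (assumed)
>> [V,D]=eig(A);
>> V*diag(exp(diag(D)))/V   % e^A computed as V e^D V^{-1}
>> expm(A)                  % built-in matrix exponential, should agree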

Controller canonical form

The controller canonical form can be pictured as a chain of integrators where each state is the integral of the next state. Usually it is represented with the final row of the $A$ matrix containing the negated coefficients of the denominator of the transfer function.

So for example a four state single input state-space representation might be \[ \begin{bmatrix} \dot{x}_{1}\\ \dot{x}_{2}\\ \dot{x}_{3}\\ \dot{x}_{4} \end{bmatrix} =\begin{bmatrix} 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ - a_{1} & - a_{2} & - a_{3} & - a_{4} \end{bmatrix} \begin{bmatrix} x_{1}\\ x_{2}\\ x_{3}\\ x_{4} \end{bmatrix} +\begin{bmatrix} 0\\ 0\\ 0\\ b_{4} \end{bmatrix}u \]

The transfer function between the input and the state variable $x_1$ of this state-space representation is \[ G(s)=\frac{x_1}{u}=\frac{b_4}{s^4+a_4s^3+a_3s^2+a_2s+a_1} \]

Any system of this form will be controllable so long as $b_4$ is non-zero.

As confirmation we can form the controllability matrix and check its rank. The controllability matrix is \[ V_c=\begin{bmatrix} 0 & 0 & 0 & b_{4}\\ 0 & 0 & b_{4} & - a_{4}\, b_{4}\\ 0 & b_{4} & - a_{4}\, b_{4} & {a_{4}}^2\, b_{4} - a_{3}\, b_{4}\\ b_{4} & - a_{4}\, b_{4} & {a_{4}}^2\, b_{4} - a_{3}\, b_{4} & a_{4}\, \left(a_{3}\, b_{4} - {a_{4}}^2\, b_{4}\right) - a_{2}\, b_{4} + a_{3}\, a_{4}\, b_{4} \end{bmatrix} \] and by inspection the determinant is $b_4^4$, which is non-zero for any non-zero $b_4$, so in this case the rank is 4 and all the states are therefore controllable.

You can use Matlab's symbolic toolbox to explore the relationship between controllability and the controller canonical form.

Set up some variables and construct a SISO state-space system in controller canonical form

>> syms a_1 a_2 a_3 a_4 b_4
>> A=[zeros(3,1) eye(3); -a_1 -a_2 -a_3 -a_4]
>> B=[zeros(3,1); b_4]
>> len=length(A);

Form the controllability matrix

>> Cm=B;for jj=2:len;Cm(:,jj)=A*Cm(:,jj-1);end   % columns are B, A*B, A^2*B, A^3*B
>> Cm

Check the determinant and the rank

>> rank(Cm)
>> det(Cm)

Observer canonical form

A form related to the controller canonical form is the observer canonical form. Here the output can be represented by a cascade of integrators, and the $A$ matrix is the transpose of the controller canonical form above. The specific forms for the $A$ and $C$ matrices are as follows. \[ A= \begin{bmatrix} 0 & 0 & 0 & - a_{1}\\ 1 & 0 & 0 & - a_{2}\\ 0 & 1 & 0 & - a_{3}\\ 0 & 0 & 1 & - a_{4} \end{bmatrix} \]

\[C=\begin{bmatrix} 0 & 0 & 0 & c_{4} \end{bmatrix}\]
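By duality with the controllability check above we can form the observability matrix for this pair and inspect its determinant; a minimal symbolic sketch

>> syms a_1 a_2 a_3 a_4 c_4
>> A=[zeros(1,3) -a_1; eye(3) [-a_2; -a_3; -a_4]]
>> C=[zeros(1,3) c_4]
>> Om=C; for jj=2:4, Om(jj,:)=Om(jj-1,:)*A; end   % rows are C, C*A, C*A^2, C*A^3
>> det(Om)   % expect plus or minus c_4^4, so observable whenever c_4 is non-zero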

Solution to the linear state-space equation

If we consider just the dynamics of the states of a linear system defined in state-space then the key equation is \[ \dot{\vec{x}}(t)=A\vec{x}(t)+B\vec{u}(t) \] where $\vec{x}(t)$ and $\vec{u}(t)$ are both functions of time. We can rewrite this equation as \[ \frac{d{\vec{x}}(t)}{dt}-A\vec{x}(t)=B\vec{u}(t) \] and then multiply both sides by an integrating factor $e^{-At}$, where $A$ is the state matrix as before. Thus \[ e^{-At}\frac{d\vec{x}(t)}{dt}-e^{-At}A\vec{x}(t)=\frac{d}{dt}\left(e^{-At}\vec{x}(t)\right)=e^{-At}B\vec{u}(t) \] We can now integrate both sides. We will also rename the time variable $t$ as $\tau$ to allow us to reuse the symbol $t$ later on. Thus integrating both sides from $0$ to $t_1$ gives \[ \int_0^{t_1} d\left(e^{-A\tau}\vec{x}(\tau)\right)=\int_0^{t_1} e^{-A\tau}B\vec{u}(\tau) d\tau \]

The integral on the left-hand side is easy to compute, so

\[ e^{-A\tau}\vec{x}(\tau)\Big|_{\tau=0}^{t_1}=\int_0^{t_1} e^{-A\tau}B\vec{u}(\tau) d\tau \] that is \[ e^{-At_1}\vec{x}(t_1)-\vec{x}(0)=\int_0^{t_1} e^{-A\tau}B\vec{u}(\tau) d\tau \] Finally we can pre-multiply by $e^{At_1}$, rearrange, and change $t_1$ back to $t$ to get \[ \vec{x}(t)=e^{At}\vec{x}(0)+e^{At}\int_0^{t} e^{-A\tau}B\vec{u}(\tau) d\tau \] or \[ \vec{x}(t)=e^{At}\vec{x}(0)+\int_0^{t} e^{A(t-\tau)}B\vec{u}(\tau) d\tau \] Of course if there is no input $\vec{u}$, or if all the elements of $B$ are zero, then the simpler form of the solution is apparent as \[ \vec{x}(t)=e^{At}\vec{x}(0) \]
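The unforced solution $\vec{x}(t)=e^{At}\vec{x}(0)$ is easy to check against a numerical ODE solver; a minimal sketch, where the system and initial state are arbitrary examples

>> A=[0 1; -2 -3]; x0=[1; 0];   % example system and initial state (assumed)
>> t=(0:0.01:5)';
>> x=zeros(numel(t),2);
>> for k=1:numel(t), x(k,:)=(expm(A*t(k))*x0)'; end   % x(t)=e^{At}x(0)
>> [~,xo]=ode45(@(t,x) A*x, t, x0);   % numerical solution at the same times
>> max(abs(x(:)-xo(:)))               % expect a value close to zero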

Examples

Linear : Mass spring damper

\[ \dot{\vec{x}} =\begin{bmatrix}0&1\\-k/m&-b/m\end{bmatrix}\vec{x} +\begin{bmatrix}0\\1/m\end{bmatrix}u \] where \[\vec{x}=\begin{bmatrix}r\\v\end{bmatrix} \] with $r$ the position variable ($x_1$) and $v$ the velocity variable ($x_2$). The scalar input $u$ is a force $f$ applied to the mass-spring-damper system.

Try expanding the matrix form to recover the classic second-order equation $m\ddot{r}+b\dot{r}+kr=f$.
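The expansion can also be checked by converting the state-space model to a transfer function; a minimal sketch, assuming the Control System Toolbox and arbitrary example parameter values

>> m=1; b=0.5; k=2;                % example parameter values (assumed)
>> A=[0 1; -k/m -b/m]; B=[0; 1/m];
>> C=[1 0]; D=0;                   % take the position r as the output
>> tf(ss(A,B,C,D))                 % expect 1/(m s^2 + b s + k)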

Non-linear : Van der Pol oscillator

The nonlinear state equations are \begin{align*} \dot{x}_1&=x_2\\ \dot{x}_2&=-x_1+\mu(1-x_1^2)x_2 \end{align*} We can make them look linear by writing \[ \dot{\vec{x}}=\begin{bmatrix}0&1\\-1&\mu(1-x_1^2)\end{bmatrix}\vec{x} \]
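The oscillator is easy to simulate with a standard ODE solver; a minimal sketch, where the value of $\mu$ and the initial state are arbitrary assumptions

>> mu=1;                                        % example value (assumed)
>> f=@(t,x) [x(2); -x(1)+mu*(1-x(1)^2)*x(2)];   % the state equations above
>> [t,x]=ode45(f, [0 20], [2; 0]);              % initial state (assumed)
>> plot(t,x(:,1))                               % x_1 settles onto the characteristic limit cycle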
