Differential and Difference Equations¶
The Nature of Mathematical Modeling (draft)¶
$ \def\%#1{\mathbf{#1}} \def\mat#1{\mathbf{#1}} \def\*#1{\vec{#1}} \def\ve#1{\vec{#1}} \def\ds#1#2{\cfrac{d #1}{d #2}} \def\dd#1#2{\cfrac{d^2 #1}{d #2^2}} \def\ps#1#2{\cfrac{\partial #1}{\partial #2}} \def\pp#1#2{\cfrac{\p^2 #1}{\p #2^2}} \def\p{\partial} \def\ba{\begin{eqnarray}} \def\ea{\end{eqnarray}} \def\eps{\epsilon} \def\del{\nabla} $
Ordinary Differential Equations¶
Order¶
- Differential equations relate derivatives of functions. The order is the highest derivative that occurs.
- A first-order equation $\ds{y}{t} = f(t)$ can be immediately solved by integration: $y(t) = y(0) + \int_0^t f(t')~dt'$
Ansatz¶
- Differential equation solutions exist and are unique (under mild smoothness conditions), so that if you find a solution it is the solution
- Ansatz is a German word meaning approach or estimate
- An ansatz can be used to solve a differential equation by guessing a form for the solution and then adjusting parameters; if it works then it is the solution
Linear Constant Coefficient¶
A linear constant coefficient differential equation is:
$$\frac{d^Ny}{dt^N}+A_1\frac{d^{N-1}y}{dt^{N-1}}+\cdots+A_{N-1}\ds{y}{t}+A_Ny = f(t)$$These are frequently encountered, in part because they occur naturally, and in part because they are easy to solve.
Trying an ansatz $y = e^{kt}$ in the homogeneous problem $f(t)=0$:
$$e^{kt}\left(k^N+A_1k^{N-1}+\cdots+A_{N-1}k+A_N\right) = 0$$Since $e^{kt}$ never vanishes, this is satisfied when $k$ is a root of the characteristic equation:
$$k^N+A_1k^{N-1}+\cdots+A_{N-1}k+A_N = 0$$This is an $N$th order polynomial that will have $N$ roots. The real parts of the roots represent exponentially growing or decaying solutions, and the imaginary parts oscillatory behavior. If the roots are distinct:
$$k^N+A_1k^{N-1}+\cdots+A_{N-1}k+A_N = (k-k_1)(k-k_2)\cdots (k-k_N)$$then the general solution is:
$$y = \sum_{n=1}^N C_n e^{k_nt}$$The $C_n$ are determined from the boundary conditions. In an initial value problem these are the values of $y$ and its derivatives at the starting time; in a boundary value problem they are given at the beginning and end of an interval. If the roots are not distinct, the $M$ solutions for an $M$th order root are:
$$y = \left(C_1+C_2t+C_3t^2+\cdots+C_Mt^{M-1}\right)e^{k_Mt}$$The complete solution is the sum of the general solution (from the homogeneous problem) and a particular solution to the inhomogeneous problem ($f(t) \ne 0$).
- As an example, consider a simple RC circuit:
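# draw the schematic of the RC low-pass filter analyzed below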
import matplotlib.pyplot as plt
import numpy as np
scale = .6
width = 7*scale
height = 3*scale
line = width/3
marker = 1.5*width
font = 3*width
fig,ax = plt.subplots(figsize=(width,height))
ax.axis([0,width,0,height])
ax.set_axis_off()
def resistor(x,y,width):
x = scale*x
y = scale*y
width = scale*width
dx = width/16
dy = 2.5*dx
xplot = np.linspace(x-width/2,x+width/2,17)
yplot = y+np.array([0,0,0,dy,0,-dy,0,dy,0,-dy,0,dy,0,-dy,0,0,0])
plt.plot(xplot,yplot,'k-',linewidth=line)
def capacitor(x,y,width):
x = scale*x
y = scale*y
width = scale*width
dy = width/2.5
xplot = [[x-width,x-width,x,x],[x+width,x+width,x,x]]
yplot = [[y+dy,y-dy,y+dy,y-dy],[y+dy,y-dy,y+width/2,y-width/2]]
plt.plot(xplot,yplot,'k-',linewidth=line)
def ground(x,y,width):
x = scale*x
y = scale*y
width = scale*width
xplot = [[x-.8*width,x-.55*width,x-.3*width,x],[x+.8*width,x+.55*width,x+.3*width,x]]
yplot = [[y+.3*width,y,y-.3*width,y+.3*width],[y+.3*width,y,y-.3*width,y+.5*width]]
plt.plot(xplot,yplot,'k-',linewidth=line)
def wire(x0,y0,x1,y1):
x0 = scale*x0
x1 = scale*x1
y0 = scale*y0
y1 = scale*y1
plt.plot([x0,x1],[y0,y1],'k-',linewidth=line)
def point(x,y):
x = scale*x
y = scale*y
plt.plot(x,y,'ko',markersize=marker)
def circle(x,y):
x = scale*x
y = scale*y
plt.plot(x,y,'ko',markersize=marker,markeredgewidth=width/2,
markerfacecolor='white',markeredgecolor='black')
resistor(2,2.25,2)
wire(.75,2.25,1,2.25)
circle(.75,2.25)
wire(3,2.25,4,2.25)
circle(4,2.25)
point(3.25,2.25)
capacitor(3.25,1.25,.5)
wire(3.25,2.25,3.25,1.5)
ground(3.25,.25,.5)
wire(3.25,1,3.25,.5)
def text(x,y,text):
x = scale*x
y = scale*y
ax.text(x,y,text,
ha='center',va='center',
math_fontfamily='cm',
fontsize=font,color='black')
text(4,1.25,'$C$')
text(4.5,2.25,'$V_o$')
text(.3,2.25,'$V_i$')
text(2,1.6,'$R$')
plt.show()
The current through the resistor and capacitor must match:
$$C\dot{V}_o = \cfrac{V_i-V_o}{R}$$therefore:
$$RC\dot{V}_o+V_o = V_i$$The characteristic equation is:
$$ \ba RCr+1 &=& 0 \\ r &=& \cfrac{-1}{RC} \ea $$therefore the undriven response is a decaying exponential:
$$V_o = Ae^{-t/RC}$$To find the driven response, assume periodic forcing $V_i = \exp(i\omega t)$ (where ...). Trying an ansatz $V_o = A\exp(i\omega t)$ shows that:
$$ \ba i\omega RCA+A &=& 1\\ A &=& \cfrac{1}{1+i\omega RC} \ea $$At low frequencies the output is equal to the input; at high frequencies it rolls off as $1/\omega$ (it is a low-pass filter) and is out of phase by $90^\circ$. Problem 1 covers the important example of a damped, driven harmonic oscillator.
- SymPy solution:
import sympy as sp
from sympy import I
from sympy.abc import R,C,t,omega
from IPython.display import display,Math
V = sp.Function('V')
# undriven response: solve the homogeneous equation
diffeq = sp.Eq(R*C*V(t).diff(t)+V(t),0)
soln = sp.dsolve(diffeq,V(t))
display(Math(f"{sp.latex(diffeq)}\\Rightarrow~{sp.latex(soln)}"))
# driven response: periodic forcing at frequency omega
diffeq = sp.Eq(R*C*V(t).diff(t)+V(t),sp.exp(I*omega*t))
soln = sp.dsolve(diffeq,V(t))
display(Math(f"{sp.latex(diffeq)}\\Rightarrow~{sp.latex(soln)}"))
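To visualize this response, here is a short numeric sketch (using the illustrative value $RC=1$) of the magnitude of $A = 1/(1+i\omega RC)$:
import numpy as np
import matplotlib.pyplot as plt
RC = 1.0                        # illustrative time constant
omega = np.logspace(-2,2,200)   # frequencies spanning 1/RC
A = 1/(1+1j*omega*RC)           # driven response found above
plt.loglog(omega,np.abs(A))     # flat at low frequencies, 1/omega roll-off above 1/RC
plt.xlabel('$\\omega RC$')
plt.ylabel('$|A|$')
plt.show()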
Systems of Differential Equations¶
An $N$th-order linear differential equation can be written as a first-order equation for an $N$-dimensional vector:
$$ \frac{d}{dt} \left(\begin{array}{c} y_0 \\ y_1 \\ \vdots \\ y_{N-2} \\ y_{N-1} \end{array}\right) = \left( \begin{array}{c c c c c} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -A_N & -A_{N-1} & -A_{N-2} & \cdots & -A_1 \end{array}\right) \left(\begin{array}{c} y_0 \\ y_1 \\ \vdots \\ y_{N-2} \\ y_{N-1} \end{array}\right) + \left(\begin{array}{c} 0 \\ 0 \\ \vdots \\ 0 \\ f(t) \end{array} \right) $$$$ \ds{\*y}{t} = \%A \*y + \*f $$This does not make the problem any simpler (it requires solving the same characteristic equation), but it can be convenient to simplify notation by using a vector first-order equation [Gershenfeld:1983].
This is a simple example of a system of differential equations. Such systems also arise whenever there are interactions; an important special case is an unforced, undamped system of masses with coordinates $(y_1, y_2, \ldots, y_N) \equiv \ve{y}$ that have a restoring force that is an arbitrary linear combination of their positions. The corresponding vector equation is:
$$ \dd{\ve{y}}{t} + \mat{A} \cdot \ve{y} = 0 ~~ . $$
Diagonalization¶
If the coupling matrix $\mat{A}$ is diagonal ($A_{ij} = 0$ for $i \ne j$) then the oscillators will be independent, but if it isn't then they won't. Let's look for a new set of variables $\ve{z} \equiv \mat{M}^{-1}\cdot\ve{y}$, defined by an unknown transformation $\mat{M}$, for which these equations decouple:
$$ \dd{\ve{z}}{t} + \mat{D} \cdot \ve{z} = 0 ~~ , $$where $\mat{D}$ is a diagonal matrix. The required transformation $\mat{M}$ can be found by changing variables:
$$ \ba \dd{\ve{y}}{t} + \mat{A} \cdot \ve{y} &=& 0 \\ \mat{M} \cdot \dd{\ve{z}}{t} + \mat{A} \cdot \mat{M} \cdot \ve{z} &=& 0 \\ \dd{\ve{z}}{t} + \mat{M}^{-1} \cdot \mat{A} \cdot \mat{M} \cdot \ve{z} &=& 0 \\ \dd{\ve{z}}{t} + \mat{D} \cdot \ve{z} &=& 0 \ea $$and so $\mat{M}^{-1} \cdot \mat{A} \cdot \mat{M} = \mat{D}$ or $\mat{A} \cdot \mat{M} = \mat{M} \cdot \mat{D}$. If the $n$th eigenvector of $\mat{A}$ is $\vec{v}_n$, with corresponding eigenvalue $\mat{A}\cdot\vec{v}_n= \vec{v}_n\lambda_n$, then if $\mat{M}=[\vec{v}_1 \cdots \vec{v}_N]$ has the eigenvectors for columns:
$$ \ba \mat{A}\cdot\mat{M} &=& \mat{A}\cdot [\vec{v}_1 \cdots \vec{v}_N] \\ &=& [\vec{v}_1 \cdots \vec{v}_N] \cdot \left[ \begin{array} {ccc} \lambda_1 & & \\ & \ddots & \\ & & \lambda_N \end{array}\right] \\ &=& \mat{M}\cdot\mat{D} \ea $$therefore $\mat{M}$ diagonalizes $\mat{A}$. The new variables here are called normal modes and behave exactly like independent oscillators. There will be as many normal modes as there are degrees of freedom, unless there are fewer distinct eigenvectors because of degenerate eigenvalues. Problem 2 finds the normal modes for a simple system.
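As a numeric illustration (an assumed example: two unit masses tied to walls and to each other by unit springs), NumPy's eigendecomposition carries out this diagonalization:
import numpy as np
A = np.array([[2.,-1.],[-1.,2.]])   # coupling matrix: wall-mass-spring-mass-wall
lam,M = np.linalg.eigh(A)           # columns of M are the eigenvectors
print(lam)                          # [1. 3.]: the squared mode frequencies
print(np.round(M.T@A@M,10))         # M^{-1}.A.M is diagonal (M^T = M^{-1} for symmetric A)
The first mode has the masses moving together, and the second has them moving oppositely.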
Laplace Transforms¶
Using the characteristic equation to solve a differential equation requires separate steps to find the general solution, search for a particular solution, and solve for the coefficients to match the boundary conditions. Laplace transforms provide a convenient alternative, turning many differential equations into an algebraic problem and giving the complete solution in a single step.
The one-sided Laplace transform of a function $f(t)$ is defined by
$$ {\cal L}\{f(t)\} \equiv F(s) = \int_0^\infty e^{-st}f(t)~dt $$If the integral extended from $-\infty$ to $\infty$ this would be the two-sided Laplace transform. The one-sided transform explicitly includes the initial conditions of the system at $t=0$, and for this reason we will use it; the two-sided transform is used for steady-state problems for which the initial conditions do not matter.
The Laplace transform is a generalization of the Fourier transform to an arbitrary complex argument. Its usefulness for differential equations comes from recognizing that differentiation essentially multiplies the Laplace transform by $s$:
$$ \ba {\cal L} \left\{ \frac{df(t)}{dt} \right\} &=& \int_0^\infty e^{-st} \frac{df(t)}{dt}~dt \\ &=& e^{-st} f(t) \bigg{|}_0^\infty + s\int_0^\infty e^{-st}f(t)~dt \\ &=& sF(s) - f(0) \ea $$where the second step follows by integrating by parts
$$ \int_A^B u~dv = uv\bigg{|}_A^B - \int_A^B v~du $$Similarly, for second derivatives
$$ \ba {\cal L} \left\{ \frac{d^2f(t)}{dt^2} \right\} &=& \int_0^\infty e^{-st} \frac{d^2f(t)}{dt^2}~dt \\ &=& e^{-st} \frac{df(t)}{dt} \bigg{|}_0^\infty + s\int_0^\infty e^{-st} \frac{df(t)}{dt}~dt \\ &=& s^2 F(s) - sf(0) - \frac{df(0)}{dt} \ea $$and by induction for higher derivatives
$$ \ba {\cal L}\left\{\frac{d^Nf(t)}{dt^N}\right\} &=& s^N F(s) - s^{N-1}f(0) - s^{N-2}\frac{df(0)}{dt} \\ & & - s^{N-3}\frac{d^2f(0)}{dt^2} - \cdots - \frac{d^{N-1}f(0)}{dt^{N-1}} \ea $$There is a corresponding relationship for integrals:
$$ \ba {\cal L}\left\{\int_0^tf(u)~du\right\} &=& \int_0^\infty e^{-st}\int_0^t f(u)~du~dt \\ &=& -\frac{e^{-st}}{s}\int_0^tf(u)~du\bigg{|}_0^\infty + \frac{1}{s}\int_0^\infty e^{-st}f(t)~dt \\ &=& \frac{1}{s} F(s) \ea $$The Laplace transform turns differential and integral equations into algebraic equations.
In order to solve a differential equation by using Laplace transforms, the steps are
- Laplace transform the differential equation into an algebraic equation, including the initial conditions.
- Solve this new equation for the unknown function in terms of the transform variable $s$.
- Find the inverse transform.
There is not an automatic way to invert a Laplace transform. The easiest approach, when it works, is to look up the inverse in a table of Laplace transforms. Many more problems can be handled by first doing a partial fraction expansion to simplify them. The transform $F(s)$ of a constant-coefficient linear differential equation will be the ratio of two polynomials
$$ F(s) = \frac{N(s)}{D(s)} $$called a rational function. The roots of the numerator $N(s)$ are called zeros, and the roots of the denominator $D(s)$ are called poles. Let the poles be $\{p_i\}_{i=1}^N$. The partial fraction expansion of an $F(s)$ with distinct poles and a numerator of lower degree than the denominator is
$$ F(s) = \sum_{i=1}^N \frac{A_i}{s-p_i} $$where
$$ A_i = \lim_{s\rightarrow p_i} F(s)(s-p_i) $$The definition of the coefficients can be verified by substitution:
$$ \left. F(s)(s-p_i) \right|_{s=p_i} = \left. A_i + \sum_{j\ne i}^N A_j \frac{s-p_i}{s-p_j} \right|_{s=p_i} = A_i $$For repeated poles $p_n$ with a multiplicity $n$, the terms in the partial fraction expansion are of the form
$$ F(s) = \frac{A_n}{(s-p_n)^n} + \frac{A_{n-1}}{(s-p_n)^{n-1}} + \cdots + \frac{A_1}{s-p_n} $$with
$$ A_n = \lim_{s\rightarrow p_n} F(s)(s-p_n)^n $$Let's return to the simple example of an RC circuit:
$$RC\dot V_o(t) + V_o(t) = V_i(t)$$$$RC[sV_o(s) - V_o(0)] + V_o(s) = V_i(s)$$$$V_o(s) = \frac{V_i(s)}{1+sRC} + \frac{RCV_o(0)}{1+sRC}$$$$V_o(t) = (RC)^{-1} e^{-t/RC} * V_i(t) + V_o(0) e^{-t/RC}$$We see immediately that the output is the sum of the transient decay of the initial state and the convolution of the exponential decay with the input. If as before we take the forcing to be $\exp(i\omega t)$,
$$V_o(s) = \frac{1}{1+sRC}\frac{1}{s-i\omega} + \frac{RCV_o(0)}{1+sRC}$$$$V_o(t) = \frac{1}{1+i\omega RC}\left(e^{i\omega t}-e^{-t/RC}\right) + V_o(0) e^{-t/RC}$$We did not see the second part of the first term before because we did not worry about satisfying the initial conditions; here it arises naturally from the use of the Laplace transform.
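The partial fraction coefficients can be checked symbolically; a minimal SymPy sketch, assuming $RC=1$, $\omega=1$, and $V_o(0)=0$ for brevity:
import sympy as sp
s,t = sp.symbols('s t')
F = 1/((1+s)*(s-sp.I))                          # transform of the driven output
poles = [-1,sp.I]
A = [sp.cancel(F*(s-p)).subs(s,p) for p in poles]
print(A)                                        # residues: -1/(1+I) and 1/(1+I)
Vo = sum(a*sp.exp(p*t) for a,p in zip(A,poles)) # table lookup 1/(s-p) <-> exp(pt)
print(sp.simplify(Vo))                          # (exp(I t)-exp(-t))/(1+I), as above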
The output $y(t)$ from an arbitrary causal linear time-invariant system (one in which the future is uniquely determined by the past, and the equations do not change over time, such as a constant-coefficient linear differential equation) is related to the input $x(t)$ by convolution with respect to the impulse response $h(t)$ (the response of the system to a delta-function input)
$$ y(t) = \int_0^t h(\tau) x(t-\tau) ~ d\tau $$Since convolution in the time domain equals multiplication of Laplace transforms,
$$ H(s) = \frac{Y(s)}{X(s)} $$$H(s)$, the Laplace transform of the impulse response, is called the system transfer function and is equal to the ratio of the Laplace transforms of the output and the input.
A general transfer function for a finite-dimensional linear system will be the ratio of two polynomials. Since the polynomials can be constructed from knowledge of the roots, the location of the poles and zeros completely characterizes the response of the system. A great deal therefore can be learned about a system from the placement of its poles and zeros. Most importantly, for the system to be globally stable, all of the poles must lie in the left half-plane ${\rm Re}(p_i) < 0$ (recall that ${\cal L}\{\exp(at)\} = 1/(s-a)$, and so if ${\rm Re}(a) > 0$ then the solution will diverge). Similarly, poles off of the real axis are associated with oscillatory solutions.
If the input is $x=\exp(i\omega t)$, then the asymptotic output is
$$ \ba y(t) &=& \lim_{t\rightarrow\infty} \int_0^t h(\tau)x(t-\tau)~d\tau \\ &=& \lim_{t\rightarrow\infty} \int_0^t h(\tau)e^{i\omega (t-\tau)}~d\tau \\ &=& \lim_{t\rightarrow\infty} e^{i\omega t} \int_0^t h(\tau)e^{-i\omega \tau}~d\tau \\ &=& e^{i\omega t} H(i\omega) \ea $$The steady-state output (after any initial transients) is equal to the input multiplied by the transfer function evaluated at $i\omega$.
Perturbation Expansions¶
One is much more likely to encounter a differential equation that is not analytically soluble than one that is. If the problem is related to one that does have an analytical solution, it can be possible to find an approximate solution by an expansion around the known one. The key is to recognize a small parameter $\eps$ in the problem that in the limit $\eps\rightarrow 0$ reduces to the known problem. Equally useful is a very large parameter, since its inverse can be used to develop an approximation.
For example, consider a harmonic oscillator with a weak nonlinear dissipation
$$ \ddot x + x + \eps \dot x^2 = 0 $$Clearly, when $\eps$ is small the solution will not be too far from simple harmonic motion. As an ansatz we will expand the solution in powers of the small parameter:
$$ x(t) = x_0(t) + \eps x_1(t) + \eps^2 x_2(t) + {\cal O}(\eps^3) $$The notation ${\cal O}(\eps^3)$ stands for terms of order $\eps^3$ and higher. Now, plug this in and collect terms based on their order. All of the terms at each order of $\eps$ must satisfy the differential equation independently of the others, because $\eps$ can be varied arbitrarily:
$$ \ddot x_0 + x_0 + \eps\left(\ddot x_1 + x_1 + \dot x_0^2\right) + {\cal O}(\eps^2) =0 $$$$ {\cal O}(\eps^0): \ddot x_0 + x_0 = 0 $$$$ {\cal O}(\eps^1): \ddot x_1 + x_1 + \dot x_0^2 = 0 $$and so forth. The lowest-order equation is just that for the unperturbed oscillator, and the higher-order equations give corrections. This is a hierarchy that can be solved in order, first finding $x_0$, then using it to find $x_1$, and on up to the desired order of approximation. If the initial conditions are chosen so that $x_0 = \exp(it)$, then this gives for $x_1$
$$ \ddot x_1 + x_1 = e^{i2t} $$The homogeneous solution is $A\exp(it) + B\exp(-it)$, and a particular solution can be found by plugging in $x_1 = C\exp(i2t)$, which gives $C = -1/3$. The nonlinearity has, to first order, introduced a harmonic into the system, a familiar feature of strongly driven systems.
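The coefficient of the induced harmonic can be verified with a quick SymPy check:
import sympy as sp
t,C = sp.symbols('t C')
x1 = C*sp.exp(2*sp.I*t)                       # ansatz for the particular solution
eq = sp.Eq(x1.diff(t,2)+x1,sp.exp(2*sp.I*t))  # the O(eps) equation above
print(sp.solve(eq,C))                         # [-1/3]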
Perturbation approximations are useful if there is a natural notion of a small deviation from a soluble problem.
Difference Equations¶
It is a common and usually reasonable approximation to consider physical systems to be continuous, and so differential equations apply. But this is not the case for digital systems, which usually are discretized in time (there is a system clock). Conveniently, the theory of discrete time equations is essentially identical to that for differential equations.
Take $y(k)$ to be a series defined only at integer values of the time $k$ (replace $k$ with $k\Delta t$ throughout this section if the time step is not an integer). An $N$-th order linear constant coefficient difference equation for $y(k)$ is
$$ L_N(y) \equiv y(k) + A_1 y(k-1) + \cdots + A_{N-1} y(k-N+1) + A_N y(k-N) = f(k) $$Substituting the ansatz $y(k) = r^k$ into the homogeneous equation ($f(k)=0$) and cancelling the $r^k$ terms gives the same characteristic equation we saw for differential equations,
$$ r^N + A_1 r^{N-1} + \cdots + A_{N-1} r + A_N = 0 $$Once again, if all of the roots of this $N$-th order polynomial are distinct
$$ r^N + A_1 r^{N-1} + \cdots + A_{N-1} r + A_N = (r-r_1) (r-r_2) \cdots (r-r_N) $$then the general solution is
$$ y_g = \sum_{n=1}^N C_n r_n^k $$where the $C_n$'s are unknown coefficients determined by the boundary conditions. A complete solution is made up of this general solution plus a particular solution to the inhomogeneous problem.
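As a concrete (assumed) example, the Fibonacci recurrence $y(k) = y(k-1)+y(k-2)$ has the characteristic equation $r^2-r-1=0$, and fitting the $C_n$ to $y(0)=0$, $y(1)=1$ reproduces the series:
import numpy as np
r1,r2 = np.roots([1,-1,-1])                # roots of r^2 - r - 1 = 0
C = np.linalg.solve([[1,1],[r1,r2]],[0,1]) # match y(0) = 0 and y(1) = 1
y = lambda k: C[0]*r1**k+C[1]*r2**k
print([round(y(k)) for k in range(10)])    # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]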
If some of the roots of the characteristic polynomial are repeated,
$$ r^N + A_1 r^{N-1} + \cdots + A_{N-1} r + A_N = (r-r_1)^M (r-r_{M+1}) \cdots (r-r_N) $$then recognizing that the operations of differentiation and shifting in time can be interchanged, and repeating the preceding argument we see that the extra solutions associated with the repeated roots are
$$ \ds{}{r} L_N (r^k) = L_N \left( \ds{r^k}{r} \right) = L_N (k r^{k-1}) = 0 $$$$ \dd{}{r} L_N (r^k) = L_N \left(\dd{}{r} r^k \right) = L_N (k(k-1)r^{k-2}) = 0 $$and so forth.
z-Transforms¶
For a series $y(k)$, the one-sided $z$-transform is defined by
$$ {\cal Z}\{y(k)\}\equiv Y(z) = \sum_{k=0}^\infty y(k) z^{-k} $$As with the Laplace transform, a two-sided transform can also be defined for signals that start at $-\infty$, extending the discrete Fourier transform to the complex plane.
Just as differentiation is the most important property of Laplace transforms, time-shifting is the most important property for $z$-transforms. Consider a series delayed by one time step:
$$ \ba {\cal Z}\{y(k-1)\} &=& \sum_{k=0}^\infty y(k-1) z^{-k} \\ &=& \sum_{k'=-1}^\infty y(k') z^{-k'}z^{-1} ~~~~ (k'=k-1) \\ &=& z^{-1} \sum_{k'=0}^\infty y(k') z^{-k'} + z^{-1} y(-1) z \\ &=& z^{-1} Y(z) + y(-1) ~~. \ea $$Similarly, for a delay of two,
$$ \ba {\cal Z}\{y(k-2)\} &=& \sum_{k=0}^\infty y(k-2) z^{-k} \\ &=& \sum_{k'=-2}^\infty y(k') z^{-k'}z^{-2} ~~~~ (k'=k-2) \\ &=& z^{-2} \sum_{k'=0}^\infty y(k') z^{-k'} + z^{-2} y(-1) z + z^{-2} y(-2) z^2 \\ &=& z^{-2} Y(z) + z^{-1} y(-1) + y(-2) \ea $$and so forth for longer delays. The $z$-transform turns a difference equation into an algebraic equation.
Many of the properties of the Laplace transform carry over to the $z$-transform. If the input forcing to a constant-coefficient linear difference equation is a unit impulse $x(k)=\delta(k)$ ($\delta(0)=1$, $\delta(k\ne 0) = 0$), then the solution $y(k) = h(k)$ is defined to be the impulse response of the system. The output for an arbitrary input is given by the convolution with the impulse response
$$ y(k) = \sum_{n=0}^k h(n) x(k-n) $$and in the $z$ domain the transfer function (the $z$-transform of the impulse response) is the ratio of the transforms of the output and the input
$$ H(z) = \frac{Y(z)}{X(z)} $$The stability of a system is determined by the location of the poles of the transfer function; for it to be stable for any bounded input signal the poles must have a complex magnitude of less than 1 (recall that ${\cal Z}\{a^k\} = z/(z-a)$, which has a pole at $a$, so $|a|$ must be less than 1 for $a^k$ to remain bounded).
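A minimal numeric sketch of this criterion, using an assumed first-order system $y(k) = a\,y(k-1) + x(k)$ whose transfer function $1/(1-az^{-1})$ has a single pole at $z=a$:
import numpy as np
def impulse_response(a,n):
    # y(k) = a y(k-1) + x(k) driven by a unit impulse at k = 0
    y = np.zeros(n)
    y[0] = 1
    for k in range(1,n):
        y[k] = a*y[k-1]
    return y
print(impulse_response(0.5,6))   # decays: the pole is inside |z| = 1
print(impulse_response(2.0,6))   # diverges: the pole is outside |z| = 1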
The frequency response can be found with a calculation similar to the continuous time case. If the input is $x(k)=\exp(i\omega\delta_t k)$, then the asymptotic output is
$$ \ba y(k) &=& \lim_{k\rightarrow\infty} \sum_{n=0}^k h(n) x(k-n) \\ &=& \lim_{k\rightarrow\infty} \sum_{n=0}^k h(n) e^{i\omega\delta_t(k-n)} \\ &=& \lim_{k\rightarrow\infty} e^{i\omega\delta_t k} \sum_{n=0}^k h(n) e^{-i\omega\delta_t n} \\ &=& e^{i\omega\delta_t k} H(e^{i\omega\delta_t}) \ea $$Now it is multiplied by the transfer function evaluated at $\exp(i\omega\delta_t)$. Problem 3 looks at the solution for, and frequency response of, a simple digital filter.
Partial Differential Equations¶
Origin¶
In the preceding section we saw how the solution for two coupled harmonic oscillators simplifies into two independent normal modes. What does the solution look like if there are 10 oscillators? $10^{10}$? Are there any simplifications? Not surprisingly, the answer is yes.
Consider an infinite chain of oscillators:
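# draw the chain of masses coupled by springs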
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import numpy as np
scale = .6
width = 12*scale
height = 1.7*scale
line = width/5
marker = 1.3*width
font = 2.3*width
fig,ax = plt.subplots(figsize=(width,height))
ax.axis([0,width,0,height])
ax.set_axis_off()
def resistor(x,y,width):
x = scale*x
y = scale*y
width = scale*width
dx = width/16
dy = 2.5*dx
xplot = np.linspace(x-width/2,x+width/2,17)
yplot = y+np.array([0,0,0,dy,0,-dy,0,dy,0,-dy,0,dy,0,-dy,0,0,0])
plt.plot(xplot,yplot,'k-',linewidth=line)
def wire(x0,y0,x1,y1):
x0 = scale*x0
x1 = scale*x1
y0 = scale*y0
y1 = scale*y1
plt.plot([x0,x1],[y0,y1],'k-',linewidth=line)
def point(x,y):
x = scale*x
y = scale*y
plt.plot(x,y,'ko',markersize=marker)
resistor(2,1.2,2)
point(3,1.2)
resistor(4,1.2,2)
point(5,1.2)
resistor(6,1.2,2)
point(7,1.2)
resistor(8,1.2,2)
def text(x,y,text):
x = scale*x
y = scale*y
ax.text(x,y,text,
ha='center',va='center',
math_fontfamily='cm',
fontsize=font,color='black')
text(0.6,1.25,'$\\cdots$')
text(2,1.9,'$k$')
text(3,1.7,'$m$')
text(3,0.1,'$y_1$')
text(4,1.9,'$k$')
text(5,1.7,'$m$')
text(5,0.1,'$y_2$')
text(6,1.9,'$k$')
text(7,1.7,'$m$')
text(7,0.1,'$y_3$')
text(8,1.9,'$k$')
text(9.3,1.25,'$\\cdots$')
def arrow(x0,y0,x1,y1):
x0 = scale*x0
x1 = scale*x1
y0 = scale*y0
y1 = scale*y1
ax.annotate('',xy=(x1,y1),xytext=(x0,y0),
arrowprops=dict(color='black',width=marker/7,headwidth=marker,headlength=marker))
arrow(3,0.5,4,0.5)
arrow(5,0.5,6,0.5)
arrow(7,0.5,8,0.5)
plt.show()
The governing equation for the $n$th mass is
$$\ba m \ddot y_n &=& - k(y_n - y_{n+1}) - k(y_n - y_{n-1}) \\ \ddot y_n &=& \frac{k}{m} (y_{n+1} - 2 y_n + y_{n-1}) \\ &=& \underbrace{k~\delta x}_{\displaystyle\tau} ~ \underbrace{\frac{\delta x}{m}}_{\displaystyle 1/\rho} ~ \frac{y_{n+1} - 2 y_n + y_{n-1}}{\delta x^2} \ea$$The two prefactors are the average spring constant $\tau$ and mass density $\rho$ (remember that spring constants in series combine inversely), and the final term is just an approximation to the second spatial derivative:
$$\ba \frac{\partial y}{\partial x} &\approx& \frac{y_n-y_{n-1}}{\delta x} \\ \frac{\partial^2 y}{\partial x^2} &\approx& \frac{1}{\delta x} \left[ \frac{y_{n+1}-y_n}{\delta x} - \frac{y_{n}-y_{n-1}}{\delta x} \right] \\ &=& \frac{y_{n+1} - 2 y_n + y_{n-1}}{\delta x^2} \ea$$Therefore in the limit of a small spacing between the springs, the system of ordinary differential equations for a chain of harmonic oscillators reduces to a single partial differential equation
$$\frac{\p^2 y}{\p t^2} = \frac{\tau}{\rho} \frac{\p^2 y}{\p x^2}$$This equation is solved by a travelling wave. To see this, substitute a general solution $y = f(x+ct)$:
$$\ba c^2f'' &=& \frac{\tau}{\rho} f'' \\ c &=& \pm \sqrt{\frac{\tau}{\rho}} \ea $$This represents an arbitrary disturbance travelling to the right and left with a velocity $c$: the location of the origin (for example) of $f$ is determined by $x+ct=0\Rightarrow x/t=-c$. If there are nonlinearities the velocity will no longer be independent of the shape of the pulse: different wavelengths will travel at different speeds, a phenomenon called dispersion. Note that unlike the case for ODEs, the general solution involves an undetermined function and not just undetermined constants.
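The substitution can be verified symbolically (a minimal SymPy sketch):
import sympy as sp
x,t,tau,rho = sp.symbols('x t tau rho',positive=True)
f = sp.Function('f')
y = f(x+sp.sqrt(tau/rho)*t)               # travelling disturbance with c^2 = tau/rho
pde = y.diff(t,2)-(tau/rho)*y.diff(x,2)   # residual of the wave equation
print(sp.simplify(pde))                   # 0 for any profile f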
Types¶
As with ordinary differential equations, we will immediately specialize to linear partial differential equations, both because they occur so frequently and because they are amenable to analytical solution. A general linear second-order PDE for a field $\phi(x,y)$ is $$ A\frac{\p^2 \phi}{\p x^2} + B\frac{\p^2 \phi}{\p x\p y} + C\frac{\p^2 \phi}{\p y^2} + D\frac{\p \phi}{\p x} + E\frac{\p \phi}{\p y} + F\phi = G $$ where $G(x,y)$ is specified in some portion of the $(x,y)$ plane and the solution must be determined in another portion.
A characteristic of a PDE is a surface across which there can be a discontinuity in the value or derivative of the solution. These define the domains which can be influenced by parts of the boundary conditions, much like the concept of a light cone in the space-time plane of special relativity. The characteristics of this equation are determined by the roots of a quadratic polynomial and accordingly can have the form of a hyperbola, a parabola, or an ellipse based on the sign of the discriminant $B^2-4AC$. The standard forms of these three cases define the most common PDEs that we will study:
- $B^2-4AC > 0$ (hyperbolic) $$\del^2\phi = \frac{1}{c^2} \frac{\p^2\phi}{\p t^2}$$
- $B^2-4AC = 0$ (parabolic) $$\del^2\phi = \frac{1}{D} \frac{\p\phi}{\p t}$$
- $B^2-4AC < 0$ (elliptic) $$\del^2\phi = \rho$$
The first of these is a wave equation (like we found for the coupled harmonic oscillators), the second is a diffusion equation (for example, for heat or for ink), and the third is Poisson's equation (or Laplace's equation if the source term $\rho = 0$) and arises in boundary value problems (for example, for electric fields or for fluid flow).
Separation of Variables¶
These three important partial differential equations can be reduced to systems of ordinary differential equations by the important technique of separation of variables. The logic of this technique may be confusing upon first acquaintance, but it rests on the uniqueness of solutions to differential equations: as with ODEs, if you can find any solution that solves the equation and satisfies the boundary conditions, then it is the solution. We will assume as an ansatz that the dependence of the solution on space and time can be written as a product of terms that each depend on a single coordinate, and then see if and how this can be made to solve the problem.
To start, the time dependence can be separated by assuming a solution of the form $\phi(\vec x,t) = \psi(\vec x)T(t)$. There is no time dependence for Laplace's equation; trying this in the diffusion equation gives: $$ T(t)\del^2\psi(\vec x) = \frac{1}{D}\psi(\vec x)\frac{\p T(t)}{\p t} $$ Dividing both sides by $\psi T$ results in no $t$ dependence on the left hand side and no $\vec x$ dependence on the right hand side, so both sides must be equal to some constant because the space and time variables can be varied arbitrarily. By convention, taking this constant to be $-k^2$ gives: $$ \frac{1}{\psi(\vec x)}\del^2\psi(\vec x) = \frac{1}{D} \frac{1}{T(t)} \ds{T}{t} = -k^2 $$ The $t$ equation can immediately be integrated to find: $$ T(t) = Ae^{-k^2Dt} $$ and the $\vec x$ equation is Helmholtz's equation: $$ \del^2\psi(\vec x) + k^2\psi(\vec x) = 0 $$
Similarly, for the wave equation this separation gives: $$ \frac{1}{\psi(\vec x)}\del^2\psi(\vec x) = \frac{1}{c^2}\frac{1}{T} \dd{T}{t} = -k^2 $$ The time equation is solved by: $$ T(t) = A\sin(kct) + B\cos(kct) $$ and the space equation is Helmholtz's equation again.
Coordinate Systems¶
Solving Helmholtz's equation will depend on the coordinate system used for the problem. There are three common ones used in 3D, based on the symmetry of the problem: rectangular, cylindrical, and spherical. Writing the derivative operators in each of these systems is a straightforward exercise in applying the chain rule to the coordinate definitions.
Rectangular¶
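# draw the rectangular coordinate axes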
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import numpy as np
scale = .6
width = 5*scale
height = 5*scale
line = width/3
marker = 2*width
font = 6*width
fig,ax = plt.subplots(figsize=(width,height))
ax.axis([0,width,0,height])
ax.set_axis_off()
def wire(x0,y0,x1,y1):
x0 = scale*x0
x1 = scale*x1
y0 = scale*y0
y1 = scale*y1
plt.plot([x0,x1],[y0,y1],'k-',linewidth=line)
def point(x,y):
x = scale*x
y = scale*y
plt.plot(x,y,'ko',markersize=marker)
def text(x,y,text):
x = scale*x
y = scale*y
ax.text(x,y,text,
ha='center',va='center',
math_fontfamily='cm',
fontsize=font,color='black')
def arrow(x0,y0,x1,y1):
x0 = scale*x0
x1 = scale*x1
y0 = scale*y0
y1 = scale*y1
ax.annotate('',xy=(x1,y1),xytext=(x0,y0),
arrowprops=dict(color='black',width=marker/7,headwidth=marker,headlength=marker))
arrow(1,1,1,4)
arrow(1,1,4,0.25)
arrow(1,1,3.5,2.25)
text(4.25,0.25,'$x$')
text(3.75,2.35,'$y$')
text(1,4.35,'$z$')
plt.show()
Writing the Laplacian $\del^2$ in rectangular coordinates leads to Helmholtz's equation as:
$$\del^2\psi + k^2\psi = \frac{\p^2\psi}{\p x^2}+\frac{\p^2\psi}{\p y^2}+\frac{\p^2\psi}{\p z^2} + k^2\psi = 0$$Assume that $\psi(\vec x) = X(x)Y(y)Z(z)$, substitute this in, and divide by it:
$$\frac{1}{X(x)}\frac{d^2X}{dx^2} + \frac{1}{Y(y)}\frac{d^2Y}{dy^2} + \frac{1}{Z(z)}\frac{d^2Z}{dz^2} + k^2 = 0$$Since each term depends only on $x$, $y$, or $z$, the only way that this equation can hold is if each has a constant value (determined by the boundary conditions) $$\frac{1}{X(x)}\frac{d^2X}{dx^2} = -k_1^2,~ \frac{1}{Y(y)}\frac{d^2Y}{dy^2} = -k_2^2,~ \frac{1}{Z(z)}\frac{d^2Z}{dz^2} = -k_3^2~$$
with $k_1^2+k_2^2+k_3^2 = k^2$. Each of these can be integrated to find:
$$\ba X &=& A_1e^{ik_1x} + B_1e^{-ik_1x} \\ Y &=& A_2e^{ik_2y} + B_2e^{-ik_2y} \\ Z &=& A_3e^{ik_3z} + B_3e^{-ik_3z} \ea$$Multiplying these back together, the spatial solution has the form
$$\psi(\vec x) = Ae^{i\vec k\cdot \vec x}$$with $\vec k\cdot \vec k = k^2$.
As an example, let's return to the 1D wave equation that we found from a chain of harmonic oscillators:
$$\pp{y}{x} = \frac{1}{c^2}\pp{y}{t}$$With the separation $y(x,t) = X(x)T(t)$ this becomes:
$$\dd{T}{t} + c^2k^2 T = 0 ~~~~~~~~ \dd{X}{x} + k^2 X = 0$$solved by:
$$T = A\sin ckt + B\cos ckt ~~~~~~~~ X = C\sin kx + D\cos kx$$We know that the chain must be fixed at the ends ($X(0) = X(L) = 0$). This implies that $D=0$, and that allowable values of the separation constant $k$ are $k_n = n\pi/L$ for integer $n$. Therefore the general solution is:
$$y(x,t) = \sum_n \sin\left(\frac{n\pi}{L}x\right) \left[A_n\sin\left(c\frac{n\pi}{L}t\right) + B_n\cos\left(c\frac{n\pi}{L}t\right)\right]$$These are the normal modes of a string, with the oscillation frequency of each mode proportional to the number of cycles across the string.
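A short sketch (with $L=1$ for illustration) plots the spatial shapes of the three lowest modes:
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0,1,200)   # position along the string, in units of L
for n in range(1,4):
    plt.plot(x,np.sin(n*np.pi*x),label=f'$n={n}$')   # mode shape sin(n pi x/L)
plt.xlabel('$x/L$')
plt.legend()
plt.show()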
Cylindrical¶
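# draw the cylindrical coordinate system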
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import numpy as np
scale = .6
width = 5*scale
height = 5*scale
marker = 2*width
line = marker/2.5
font = 6*width
fig,ax = plt.subplots(figsize=(width,height))
ax.axis([0,width,0,height])
ax.set_axis_off()
def dashedline(x0,y0,x1,y1):
x0 = scale*x0
x1 = scale*x1
y0 = scale*y0
y1 = scale*y1
plt.plot([x0,x1],[y0,y1],'k--',linewidth=line)
def point(x,y):
x = scale*x
y = scale*y
plt.plot(x,y,'ko',markersize=marker)
def text(x,y,text):
x = scale*x
y = scale*y
ax.text(x,y,text,
ha='center',va='center',
math_fontfamily='cm',
fontsize=font,color='black')
def arrow(x0,y0,x1,y1):
x0 = scale*x0
x1 = scale*x1
y0 = scale*y0
y1 = scale*y1
ax.annotate('',xy=(x1,y1),xytext=(x0,y0),
arrowprops=dict(color='black',width=marker/7,headwidth=marker,headlength=marker))
arrow(1,1,1,4)
arrow(1,1,4,0.25)
arrow(1,1,3.5,2.25)
text(4.25,0.25,'$x$')
text(3.75,2.35,'$y$')
text(1,4.35,'$z$')
dashedline(1,1,3,1.25)
dashedline(3,1.25,3,3.5)
point(3,3.5)
text(2.6,0.9,'$\\varphi$')
text(3.3,1.3,'$r$')
text(3.3,3.5,'$z$')
plt.show()
In cylindrical coordinates the Helmholtz equation is:
$$\del^2\psi + k^2\psi = \pp{\psi}{r} + \frac{1}{r}\ps{\psi}{r} + \frac{1}{r^2}\pp{\psi}{\phi} + \pp{\psi}{z} + k^2\psi = 0$$Once again, try separating by substituting in $\psi = R(r)\Phi(\phi)Z(z)$ and dividing by it:
$$\frac{1}{R}\left[\dd{R}{r}+\frac{1}{r}\ds{R}{r}\right] + \frac{1}{r^2}\frac{1}{\Phi}\dd{\Phi}{\phi} + \frac{1}{Z}\dd{Z}{z} + k^2 = 0$$The terms will cancel if:
$$\ba \frac{1}{\Phi}\dd{\Phi}{\phi} &=& -m^2 \\ \frac{1}{Z}\dd{Z}{z} &=& \alpha^2 - k^2 \\ \frac{1}{R}\left[\dd{R}{r}+\frac{1}{r}\ds{R}{r}\right] - \frac{m^2}{r^2} + \alpha^2 &=& 0 \ea$$for constants $\alpha$ and $m$ (these definitions are conventional). The first equation is easily solved:
$$\Phi = A\sin{m\phi} + B\cos{m\phi}$$For the solution to be single valued $\Phi(\phi+2\pi) = \Phi(\phi)$, and so $m$ must be an integer. The second equation is similarly solved:
$$Z = Ce^{z\sqrt{\alpha^2-k^2}} + De^{-z\sqrt{\alpha^2-k^2}}$$Rewriting the radial equation in terms of $r = \rho/\alpha$,
$$\dd{R}{\rho} + \frac{1}{\rho}\ds{R}{\rho} + \left(1-\frac{m^2}{\rho^2}\right)R = 0$$This is Bessel's equation; its solution is given by Bessel functions:
$$R = E J_m(\alpha r) + F N_m(\alpha r)$$$N_m$ is singular as $r\rightarrow 0$ while $J_m$ is not, so if the solution is finite at the origin $F=0$. If the radial solution must vanish for some $r$ value it is necessary to know where the zeros of $J_m$ occur; these are tabulated in many sources. The lowest ones are:
$$\ba J_0(x)=0 &\Rightarrow& x \approx 2.405, 5.520, 8.654, \ldots \\ J_1(x)=0 &\Rightarrow& x \approx 3.832, 7.016, 10.173, \ldots \\ J_2(x)=0 &\Rightarrow& x \approx 5.136, 8.417, 11.620, \ldots \ea $$
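These no longer need to be looked up in printed tables; SciPy tabulates them, as a quick check:
from scipy.special import jn_zeros
for m in range(3):
    print(m,jn_zeros(m,3))   # first three zeros of J_0, J_1, J_2
If $\alpha=0$, the radial equation becomes: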
$$\dd{R}{r} + \frac{1}{r}\ds{R}{r} - \frac{m^2}{r^2}R = 0$$which is solved by
$$R(r) = \left\{\begin{array}{l l} Gr^m+Hr^{-m} & (m \ne 0) \\ G + H\ln r & (m=0) \end{array} \right.$$
Spherical¶
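# draw the spherical coordinate system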
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import numpy as np
scale = .6
width = 5*scale
height = 5*scale
marker = 2*width
line = marker/2.5
font = 6*width
fig,ax = plt.subplots(figsize=(width,height))
ax.axis([0,width,0,height])
ax.set_axis_off()
def dashedline(x0,y0,x1,y1):
x0 = scale*x0
x1 = scale*x1
y0 = scale*y0
y1 = scale*y1
plt.plot([x0,x1],[y0,y1],'k--',linewidth=line)
def point(x,y):
x = scale*x
y = scale*y
plt.plot(x,y,'ko',markersize=marker)
def text(x,y,text):
x = scale*x
y = scale*y
ax.text(x,y,text,
ha='center',va='center',
math_fontfamily='cm',
fontsize=font,color='black')
def arrow(x0,y0,x1,y1):
x0 = scale*x0
x1 = scale*x1
y0 = scale*y0
y1 = scale*y1
ax.annotate('',xy=(x1,y1),xytext=(x0,y0),
arrowprops=dict(color='black',width=marker/7,headwidth=marker,headlength=marker))
arrow(1,1,1,4)
arrow(1,1,4,0.25)
arrow(1,1,3.5,2.25)
text(4.25,0.25,'$x$')
text(3.75,2.35,'$y$')
text(1,4.35,'$z$')
dashedline(1,1,3,1.25)
dashedline(3,1.25,3,3.5)
dashedline(1,1,3,3.5)
point(3,3.5)
text(2.6,0.9,'$\\varphi$')
text(2.7,3.6,'$r$')
text(1.25,1.8,'$\\theta$')
plt.show()
Finally, in spherical coordinates we want to separate:
$$\del^2\psi + k^2\psi = 0$$$$\frac{1}{r}\frac{\p^2}{\p r^2}(r\psi) + \frac{1}{r^2\sin\theta}\left[\frac{\p}{\p\theta} \left(\sin\theta\ps{\psi}{\theta}\right) + \frac{1}{\sin\theta} \pp{\psi}{\phi}\right] + k^2 \psi = 0$$Let's start with the radial part: $\psi = R(r)Y(\theta,\phi)$
$$\frac{1}{R}\frac{1}{r} \frac{d^2}{dr^2}(rR) + \frac{1}{r^2} \frac{1}{Y\sin\theta} \left[\frac{\p}{\p\theta}\left( \sin\theta \ps{Y}{\theta} \right) + \frac{1}{\sin\theta} \pp{Y}{\phi} \right] + k^2 = 0 $$$$\Rightarrow \frac{1}{Y\sin\theta} \left[ \frac{\p}{\p\theta}\left( \sin\theta \ps{Y}{\theta} \right) + \frac{1}{\sin\theta} \pp{Y}{\phi} \right] = -\lambda$$$$\frac{1}{R}\frac{1}{r} \frac{d^2}{dr^2}(rR) + k^2 - \frac{\lambda}{r^2} = 0$$for a constant $\lambda$. If $k^2 \ne 0$, substituting $r = \rho/k$ and then $R = S/\sqrt{\rho}$ gives Bessel's equation again:
$$\dd{S}{\rho} + \frac{1}{\rho}\ds{S}{\rho} + \left(1- \frac{\lambda+1/4}{\rho^2} \right)S = 0$$with the solution:
$$R = A \frac{1}{\sqrt{kr}} J_{\sqrt{\lambda+1/4}}(kr) + B \frac{1}{\sqrt{kr}} N_{\sqrt{\lambda+1/4}}(kr)$$If $k^2=0$, the radial equation simplifies to:
$$\frac{1}{r} \frac{d^2}{dr^2}(rR) - \frac{\lambda}{r^2}R = 0$$solved by:
$$R = Ar^{(-1+\sqrt{1+4\lambda})/2} + Br^{(-1-\sqrt{1+4\lambda})/2}$$Now separate the angular parts with $Y = \Theta(\theta)\Phi(\phi)$:
$$\frac{1}{\Theta} \frac{1}{\sin\theta} \frac{d}{d\theta}\left(\sin\theta \ds{\Theta}{\theta}\right) + \frac{1}{\sin^2\theta} \frac{1}{\Phi} \dd{\Phi}{\phi} + \lambda = 0$$$$\Rightarrow\frac{1}{\Phi}\dd{\Phi}{\phi} = -m^2$$$$\frac{1}{\sin\theta} \frac{d}{d\theta}\left(\sin\theta \ds{\Theta}{\theta} \right) + \left(\lambda - \frac{m^2}{\sin^2\theta}\right) \Theta = 0$$The solution of the $\phi$ equation is:
$$\Phi = Ae^{im\phi} + Be^{-im\phi}$$For the $\theta$ equation, substitute $x=\cos\theta$:
$$\dd{\Theta}{x} - \frac{2x}{1-x^2}\ds{\Theta}{x} + \frac{1}{1-x^2} \left[ \lambda - \frac{m^2}{1-x^2} \right] \Theta = 0$$This is Legendre's equation, solved by the Legendre functions:
$$\Theta = CP_l^m(x) + DQ_l^m(x)$$with $l(l+1)=\lambda$.
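These are also available numerically; a quick sketch using SciPy's associated Legendre function $P_l^m$ (note that lpmv includes the Condon–Shortley phase):
import numpy as np
from scipy.special import lpmv
x = np.cos(np.pi/4)    # x = cos(theta)
print(lpmv(0,1,x))     # P_1^0(x) = x
print(lpmv(1,1,x))     # P_1^1(x) = -sqrt(1-x^2)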
References¶
- [Gershenfeld:1983] Gershenfeld, N. A., & Bilaniuk, O. M. (1983). APL And The Numerical Solution Of High-Order Linear Differential Equations. American Journal of Physics, 51(8), 743.
Problems¶
- Consider the motion of a damped, driven harmonic oscillator (such as a mass on a spring, a ball in a well, or a pendulum making small motions):
$$
m \ddot x + \gamma \dot x + k x = e^{i\omega t}
$$
- Under what conditions will the governing equations for small displacements of a particle around an arbitrary 1D potential minimum be simple undamped harmonic motion?
- Find the solution to the homogeneous equation, and comment on the possible cases. How does the amplitude depend on the frequency?
- Find a particular solution to the inhomogeneous problem by assuming a response at the driving frequency, and plot its magnitude and phase as a function of the driving frequency for $m=k=1, \gamma=0.1$.
- For a driven oscillator the Q or Quality factor is defined as the ratio of the center frequency to the width of the curve of the average energy (kinetic + potential) in the oscillator versus the driving frequency (the width is defined by the places where the curve falls to half its maximum value). For an undriven oscillator the $Q$ is defined to be the ratio of the energy in the oscillator to the energy lost per radian (one cycle is $2\pi$ radians). Show that these two definitions are equal, assuming that the damping is small. How long does it take the amplitude of a 100 Hz oscillator with a $Q$ of $10^9$ to decay by $1/e$?
- For an arbitrary potential minimum, work out the form of the lowest-order correction to simple undamped unforced harmonic motion.
- Use matrix diagonalization to find the normal modes of a system of two coupled harmonic oscillators.
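# draw two coupled oscillators between fixed walls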
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import numpy as np
scale = .6
width = 8*scale
height = 1.7*scale
line = width/3
marker = 2*width
font = 3*width
fig,ax = plt.subplots(figsize=(width,height))
ax.axis([0,width,0,height])
ax.set_axis_off()
def resistor(x,y,width):
x = scale*x
y = scale*y
width = scale*width
dx = width/16
dy = 2.5*dx
xplot = np.linspace(x-width/2,x+width/2,17)
yplot = y+np.array([0,0,0,dy,0,-dy,0,dy,0,-dy,0,dy,0,-dy,0,0,0])
plt.plot(xplot,yplot,'k-',linewidth=line)
def wire(x0,y0,x1,y1):
x0 = scale*x0
x1 = scale*x1
y0 = scale*y0
y1 = scale*y1
plt.plot([x0,x1],[y0,y1],'k-',linewidth=line)
def point(x,y):
x = scale*x
y = scale*y
plt.plot(x,y,'ko',markersize=marker)
resistor(2,1.2,2)
point(3,1.2)
resistor(4,1.2,2)
point(5,1.2)
resistor(6,1.2,2)
def text(x,y,text):
x = scale*x
y = scale*y
ax.text(x,y,text,
ha='center',va='center',
math_fontfamily='cm',
fontsize=font,color='black')
text(2,1.9,'$k$')
text(3,1.7,'$m$')
text(3,0.1,'$y_1$')
text(4,1.9,'$k$')
text(5,1.7,'$m$')
text(5,0.1,'$y_2$')
text(6,1.9,'$k$')
def arrow(x0,y0,x1,y1):
x0 = scale*x0
x1 = scale*x1
y0 = scale*y0
y1 = scale*y1
ax.annotate('',xy=(x1,y1),xytext=(x0,y0),
arrowprops=dict(color='black',width=marker/7,headwidth=marker,headlength=marker))
arrow(3,0.5,4,0.5)
arrow(5,0.5,6,0.5)
def rectangle(x,y,w,h):
x = scale*x
y = scale*y
w = scale*w
h = scale*h
#rect = patches.Rectangle((x,y),w,h,facecolor='gray')
rect = patches.Rectangle((x,y),w,h,facecolor=(0.7,0.7,0.7))
ax.add_patch(rect)
rectangle(0.5,0.6,.5,1.8)
rectangle(7,0.6,.5,1.8)
plt.show()
- A common simple digital filter used for smoothing a signal is $$ y(k) = \alpha y(k-1) + (1-\alpha) x(k) $$ where $\alpha$ is a parameter that determines the response of the filter. Solve for $y(k)$ as a function of $x(k)$. What is the amplitude of the frequency response?
- Harder: Consider a round drumhead of radius $L$. For small displacements its motion is described by a linear wave equation. Find the frequencies of the six lowest oscillation modes, and plot the shape of the modes.
(c) Neil Gershenfeld 2/11/26