\documentclass[12pt]{article}
\usepackage{graphicx}
%\usepackage{epsfig}
%\topmargin=+0.0in
%\oddsidemargin=0.25in
\textwidth=6.0in
%\textheight=8in
%\footskip=6ex
%\footheight=2ex
\begin{document}
\begin{center}
{\bf Notes on LRC circuits and Maxwell's Equations}
\end{center}
Last quarter, in Phy133, we covered electricity and magnetism. There
was not enough time to finish these topics, and this quarter we start where we
left off and complete the classical treatment of the electromagnetic
interaction. We begin with a discussion of circuits which contain a
capacitor, resistor, and a significant amount of self-induction. Then we
will revisit the equations for the electric and magnetic fields and
add the final piece, due to Maxwell. As we will see, the missing term
added by Maxwell will unify electromagnetism and light. Besides
unifying different phenomena and deepening our understanding of physics, Maxwell's term
led the way to the development of wireless communication, and revolutionized
our world.
\bigskip
\centerline{\bf LRC Circuits}
\bigskip
Last quarter we covered circuits that contained batteries and resistors. We also
considered circuits with a capacitor plus resistor as well as resistive circuits that
have a large amount of self-inductance. The self-inductance was dominated by
a coiled element, i.e. an inductor. Now we will treat circuits that have
all three properties, capacitance, resistance and self-inductance.
We will use the same "physics" we discussed last quarter pertaining to circuits.
There are only two basic principles needed to analyze circuits.\\
\noindent 1. {\bf The sum of the currents going into a junction (of wires)
equals the sum of the currents leaving that junction}. Another way is
to say that the charge flowing into equals the charge flowing out of any
junction. This is essentially a statement that charge is conserved.\\
\noindent 2. {\bf The sum of the voltage drops around a closed wire loop equals
the rate of change of the magnetic flux through the loop}. If the current is not
changing, the sum is zero. If the current is changing, then the line integral
$\oint \vec{E} \cdot \vec{dl}$ is not zero, but rather equals
$- d \Phi_m / dt$. Thus, the law involving voltage changes around a closed loop is:
\begin{equation}
\sum (Voltage \; drops) = - \oint \vec{E} \cdot \vec{dl} = {{d \Phi_m} \over {dt}}
\end{equation}
\noindent where $\Phi_m$ is the magnetic flux through the closed path of the loop.
The direction of adding the voltage drops determines the direction of positive
magnetic flux.
Last quarter we applied these two laws of physics to circuits containing
resistors and capacitors, as well as a circuit containing a resistor and inductor.
You should review what was covered. Now we consider a circuit that has
a capacitor and an inductor.\\
\bigskip
\noindent {\it L-C Resonance Circuit}\\
An important application is a circuit that has a large self-inductance
and a capacitor. We start with the simplest case. For the circuit to
have a large self-inductance, we add a coil (or solenoid) that has an
inductance $L$. The circuit contains only the capacitor connected
to the solenoid. We assume that the self-inductance of the
rest of the circuit is negligible compared to the solenoid, so that the net self-inductance of the
circuit is $L$. With self-inductance $L$, the magnetic flux through the circuit is $\Phi_m = LI$.
The sum of the voltage changes around the loop in the direction of "+" current becomes
\begin{equation}
\sum (Voltage \; drops) = + L {{dI} \over {dt}}
\end{equation}
\noindent where $I$ is the current in the circuit. Note that the sign
on the right side of the equation is "+" if the direction of voltage changes
is in the same direction as the current. It would be minus if the other
direction were chosen. Let the capacitor have a capacitance
of $C$. Let $\pm Q$ be the charge on the plates of the capacitor. Applying the
voltage equation to the voltage changes around the circuit gives:
\begin{equation}
V_c = L {{d I} \over {dt}}
\end{equation}
\noindent where $V_c$ is the voltage across the capacitor. To solve this
equation, we need to have another relationship between $V_c$ and $I$.
Expressing $V_c$ and $I$ in terms of the charge on the capacitor plates
will give us the connection we need. If $\pm Q$
are the charges on each plate of the capacitor, we have $V_c = Q/C$.
\noindent The current is the rate of change of the charge on the capacitor.
If we take $+I$ in a direction away from the positive plate of the capacitor,
then $I = - (dQ)/(dt)$. Substituting into the equation above gives:
\begin{eqnarray*}
V_c & = & L {{d I} \over {dt}}\\
{Q \over C} & = & -L {{d^2 Q} \over {dt^2}} \\
{{d^2 Q} \over {dt^2}} & = & - {Q \over {LC}}
\end{eqnarray*}
The above equation is a relatively simple differential equation.
The solution $Q(t)$ is a function whose second derivative is minus itself.
Functions with this property are sinusoidal functions. How do we include
the factor $1/(LC)$? We can guess the solution by remembering how the
chain rule works. You should verify that the function $Q(t) = A sin(t/\sqrt{LC})$
as well as the function $Q(t) = B cos(t/\sqrt{LC})$ are solutions to the
equation above. Any linear combination of these two solutions is also a solution.
So the most general solution is
\begin{equation}
Q(t) = A sin({t \over \sqrt{LC}}) + B cos({t \over \sqrt{LC}})
\end{equation}
\noindent where $A$ and $B$ are constants that depend on the initial conditions.
Since the equation is a second order differential equation, there will be two
"integration constants". If the initial conditions are such that the initial
charge on the capacitor is $Q_0$ and that the current $I$ is initially
zero, then $B=Q_0$ and $A=0$. For these initial conditions
the solution is
\begin{equation}
Q(t) = Q_0 cos({t \over \sqrt{LC}})
\end{equation}
From a knowledge of $Q(t)$, the voltage across the capacitor and the current in
the circuit can be determined. The voltage across the capacitor is
\begin{equation}
V_c(t) = {Q_0 \over C} cos({t \over \sqrt{LC}}) = V_0 cos({t \over \sqrt{LC}})
\end{equation}
\noindent and the current in the circuit is
\begin{eqnarray*}
I(t) & = & - {{dQ} \over {dt}} \\
I(t) & = & {Q_0 \over \sqrt{LC}} sin({t \over \sqrt{LC}})
\end{eqnarray*}
These are interesting results. The voltage across the capacitor and the current in
the circuit oscillate back and forth. The period $T$ of this oscillation is given by
\begin{eqnarray*}
{T \over \sqrt{LC}} & = & 2 \pi \\
T & = & 2 \pi \sqrt{LC}
\end{eqnarray*}
\noindent The frequency of the oscillation is given by
$f = 1/T = 1/(2 \pi \sqrt{LC} )$. This frequency is the resonance frequency
of the circuit. Note that the quantity $1/\sqrt{LC}$ occurs in the argument of the
sin (or cos) function. It is convenient to define $\omega_0 \equiv 1/\sqrt{LC}$.
With this definition, we have for $I(t)$ and $V_c(t)$:
\begin{eqnarray*}
I(t) & = & {Q_0 \over \sqrt{LC}} sin(\omega_0 t)\\
V_c(t) & = & V_0 cos(\omega_0 t)
\end{eqnarray*}
\noindent where
\begin{equation}
\omega_0 \equiv {1 \over \sqrt{LC}}
\end{equation}
\noindent A couple of things to note.\\
\noindent 1) The quantity $\omega_0$ has units of $1/time$. It enters in the argument of the
sinusoidal functions in the same way as angular frequency would be for an
object moving in a circle. Nothing is rotating here, and there are no physical
angles. So when we refer to $\omega$ as the angular frequency, just think of
it as $2 \pi$ times the frequency.\\
\noindent 2) Both the current in the circuit and $V_c$ are sinusoidal functions,
but they are out of phase by $90^\circ$. One varies as $sin(\omega t)$ and the
other as $cos(\omega t)$.\\
\noindent 3) Initially $I=0$, and all the energy is in the electric field of the
capacitor, and the magnetic field in the solenoid is zero. Then at $t=T/4$, there
is no charge on the capacitor, the current is maximized, and all the energy is
in the magnetic field of the solenoid. The energy is being transferred back
and forth between the capacitor (electric field energy)
and the inductor (magnetic field energy). \\
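The oscillation derived above can be checked numerically. Below is a minimal sketch, assuming illustrative values $L = 0.5$ H, $C = 2 \; \mu$F, and $Q_0 = 1$ mC (these numbers are not taken from the notes); it integrates $d^2Q/dt^2 = -Q/(LC)$ directly and compares the result with $Q_0 cos(t/\sqrt{LC})$ after one full period:

```python
# Numerical check of the LC oscillation; L, C, Q0 are assumed values
# chosen only for illustration.
import math

L = 0.5       # inductance in henries (assumed)
C = 2.0e-6    # capacitance in farads (assumed)
Q0 = 1.0e-3   # initial charge in coulombs; I(0) = 0

omega0 = 1.0 / math.sqrt(L * C)
T = 2.0 * math.pi * math.sqrt(L * C)   # predicted period

# Integrate d^2Q/dt^2 = -Q/(LC) with a velocity-Verlet step.
dt = T / 10000.0
Q, dQ = Q0, 0.0
t = 0.0
while t < T:
    a = -Q / (L * C)
    Q += dQ * dt + 0.5 * a * dt * dt
    a_new = -Q / (L * C)
    dQ += 0.5 * (a + a_new) * dt
    t += dt

# After one full period the numerical charge should track the
# analytic solution Q0*cos(omega0*t).
print(Q, Q0 * math.cos(omega0 * t))
```

The integrator reproduces the analytic solution, and the measured period matches $T = 2\pi\sqrt{LC}$.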
In any real circuit there will be resistance. The energy of the $LC$ circuit
will be gradually dissipated by resistive elements. Next we consider what
happens if we include resistance in the circuit.\\
\bigskip
\noindent {\it R-L-C Resonance Circuit}\\
\bigskip
Consider now a series circuit that has a capacitor, capacitance $C$, a resistor,
resistance $R$, and a solenoid, self-inductance $L$. As before, we will assume
that the self-inductance of the whole circuit is approximately that of the
solenoid, $L$. Let's also neglect the resistance of the solenoid. Now, we can
apply the physics of circuits to the $R-L-C$-series circuit.
\begin{equation}
\sum (Voltage \; drops) = L {{dI} \over {dt}}
\end{equation}
\noindent if the path for the voltage changes is in the same direction
as the "+" direction of the current. As before, we take the "+" direction
of the current $I$ to
be away from the positive side of the capacitor. Adding up the changes in
voltage in the direction of "+" current we have:
\begin{equation}
V_c -RI = L {{dI} \over {dt}}
\end{equation}
\noindent $V_c$ is positive, since the path goes from the negative to the positive
side of the capacitor (a gain in voltage). The voltage drop across the resistor is
$-RI$ because the path is in the direction of the current (i.e. a drop in voltage).
As before, substituting $V_c=Q/C$ and $I=-(dQ/dt)$ gives
\begin{eqnarray*}
{Q \over C} + R {{dQ} \over {dt}} & = & -L{{d^2Q} \over {dt^2}} \\
L{{d^2Q} \over {dt^2}} + R {{dQ} \over {dt}} + {Q \over C} & = & 0 \\
{{d^2Q} \over {dt^2}} + {R \over L} {{dQ} \over {dt}} + {Q \over {LC}} & = & 0 \\
{{d^2Q} \over {dt^2}} + {R \over L} {{dQ} \over {dt}} + \omega_0^2 Q & = & 0
\end{eqnarray*}
\noindent where $\omega_0 \equiv 1/\sqrt{LC}$.
We can guess what the solution to the differential equation above should be.
With $R=0$, the solution is sinusoidal. If $L=0$, the solution is a decaying
exponential. So we can guess that the solution might be a sinusoidal function
multiplied by a decaying exponential.
There are different ways to solve this differential equation. I will
show two. The first way is rather complicated, but uses real quantities.
The second way is simpler, and uses complex numbers. Complex numbers
will be very useful in future calculations.\\
\noindent {\it Using real quantities}\\
Let's try to find a solution of the form $Q(t) = e^{-\gamma t}(A cos(\omega t) + B sin(\omega t))$,
where all quantities are real. The constants $A$ and $B$ should depend on the initial conditions.
That is, we need to find a solution that is valid for any $A$ and $B$. The constants
$\gamma$ and $\omega$ therefore need to exist such that $Q(t)$ is a solution for any $A$ and $B$. Differentiating
our expression for $Q(t)$ gives:
\begin{eqnarray*}
Q(t) & = & e^{-\gamma t}(A cos(\omega t) + B sin(\omega t)) \\
{{dQ} \over {dt}} & = & e^{-\gamma t} ((B\omega - A\gamma )cos(\omega t) -
(B\gamma + A\omega )sin(\omega t) ) \\
{{d^2Q} \over {dt^2}} & = & e^{-\gamma t}((A(\gamma^2-\omega^2)-2B\omega \gamma)cos(\omega t) +
(B(\gamma^2-\omega^2)+2A\omega \gamma)sin(\omega t))
\end{eqnarray*}
\noindent Now, substituting $Q$, $dQ/(dt)$, and $d^2Q/(dt^2)$ into the differential equation,
gives
\begin{eqnarray*}
e^{-\gamma t}cos(\omega t)
(A(\gamma^2 - \omega^2 - R\gamma /L + \omega_0^2) + B\omega (R/L -2\gamma)) \; + & \\
e^{-\gamma t}sin(\omega t)
(A\omega (2\gamma -R/L) + B(\gamma^2 - \omega^2 -R\gamma /L + \omega_0^2)) & = 0
\end{eqnarray*}
\noindent This equation must be true for all times $t$. Therefore the terms multiplying
the $sin(\omega t)$ and the $cos(\omega t)$ must each be equal to zero:
\begin{eqnarray*}
A(\gamma^2 - \omega^2 - R\gamma /L + \omega_0^2) + B\omega (R/L -2\gamma) & = & 0\\
A\omega (2\gamma -R/L) + B(\gamma^2 - \omega^2 -R\gamma /L + \omega_0^2) & = & 0
\end{eqnarray*}
\noindent The condition that these two equations be valid for any $A$ and $B$ requires
that:
\begin{eqnarray*}
2\gamma - R/L & = & 0 \\
\gamma & = & {R \over {2L}}
\end{eqnarray*}
\noindent and
\begin{eqnarray*}
\gamma^2 - \omega^2 - R\gamma /L + \omega_0^2 & = & 0 \\
{R^2 \over {4L^2}} - \omega^2 - {R^2 \over {2L^2}} +\omega_0^2 & = & 0 \\
\omega^2 & = & \omega_0^2 - {R^2 \over {4L^2}} \\
\omega & = & \sqrt{\omega_0^2 -(R/(2L))^2}
\end{eqnarray*}
\noindent So the general solution for the $LRC$ decaying circuit is
\begin{eqnarray*}
Q(t) & = & e^{-Rt/(2L)}(A cos(\sqrt{\omega_0^2 - (R/(2L))^2} \; t) +
B sin(\sqrt{\omega_0^2 - (R/(2L))^2} \; t)) \\
Q(t) & = & A_0 e^{-\gamma t} cos (\omega t + \alpha)
\end{eqnarray*}
\noindent where $\gamma = R/(2L)$ and $\omega = \sqrt{\omega_0^2-\gamma^2}$. For
the last step, we have combined a sin plus cos into a cos plus an angle. In
terms of $A$ and $B$, the constant $A_0 = \sqrt{A^2 + B^2}$ and $tan(\alpha) = -B/A$:
\begin{eqnarray*}
A_0 cos(\omega t + \alpha) & = & A_0cos(\omega t) cos(\alpha) -A_0sin(\omega t)sin(\alpha) \\
& = & A_0cos(\omega t) ({A \over A_0}) - A_0sin(\omega t)({{-B} \over A_0}) \\
& = & Acos(\omega t) + Bsin(\omega t)
\end{eqnarray*}
\noindent $A$ and $B$ are two legs of a right triangle. The hypotenuse is equal to
$\sqrt{A^2 + B^2}=A_0$. The $cos(\alpha) = A/A_0$ and $sin(\alpha)=-B/A_0$.
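The damped solution can be spot-checked numerically. The sketch below assumes arbitrary illustrative values for $R$, $L$, $C$, $A$, and $B$ (not taken from the notes), sets $\gamma = R/(2L)$ and $\omega = \sqrt{\omega_0^2 - \gamma^2}$, and verifies with finite differences that $d^2Q/dt^2 + (R/L)(dQ/dt) + \omega_0^2 Q$ vanishes:

```python
# Finite-difference check that Q(t) = e^{-gamma t}(A cos wt + B sin wt)
# satisfies Q'' + (R/L) Q' + w0^2 Q = 0.  All component values are
# assumptions chosen only for illustration.
import math

R, L, C = 10.0, 0.5, 2.0e-6          # assumed component values
A, B = 3.0e-4, -1.2e-4               # arbitrary constants (initial conditions)

omega0 = 1.0 / math.sqrt(L * C)
gamma = R / (2.0 * L)
omega = math.sqrt(omega0 ** 2 - gamma ** 2)

def Q(t):
    return math.exp(-gamma * t) * (A * math.cos(omega * t) + B * math.sin(omega * t))

# Central finite differences for Q' and Q''.
t, h = 1.3e-3, 1e-6
Q1 = (Q(t + h) - Q(t - h)) / (2 * h)
Q2 = (Q(t + h) - 2 * Q(t) + Q(t - h)) / h ** 2

residual = Q2 + (R / L) * Q1 + omega0 ** 2 * Q(t)
print(residual)   # should be ~0 compared with omega0^2 * |Q(t)|
```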
The math using real quantities is somewhat complicated. Now let's do the same
thing with complex numbers.\\
\noindent {\it Using complex numbers}\\
The mathematical relationship that we use is "Euler's Equation":
\begin{equation}
e^{ix} = cos(x) + i sin(x)
\end{equation}
\noindent where $x$ is real and $i=\sqrt{-1}$. Guided by the reasoning we used
with real numbers, we look for a solution of the form $Q(t) = Re(ze^{\beta t})$,
where $z$ and $\beta$ are constants that are complex numbers. Note, the solution
$Q(t)$ is real since we are taking the real part (Re) of a complex number.
First, I'll show that the form we have chosen results in
the same form as with the real numbers. Since $z$ is complex, we can
write it as $z=A_0e^{i \alpha}$, where $A_0$ and $\alpha$ are real. Since $\beta$ is complex, we can
write it as $\beta = - \gamma + i \omega$, where $\gamma$ and $\omega$ are real. So, we have
\begin{eqnarray*}
Q(t) & = & Re(A_0e^{i\alpha}e^{(-\gamma + i \omega)t}) \\
& = & Re(A_0 e^{-\gamma t} e^{i(\omega t + \alpha)}) \\
& = & A_0 e^{-\gamma t} Re(e^{i(\omega t + \alpha)}) \\
& = & A_0 e^{-\gamma t} cos(\omega t + \alpha)
\end{eqnarray*}
\noindent which is of the same form as that using the real parameters. That is, $Re(\beta ) \rightarrow
-\gamma$, $Im(\beta ) \rightarrow \omega$. $z$ (i.e. $A_0$ and $\alpha$) is a complex constant that
depends on the initial conditions as before. We can now substitute the complex function
$ze^{\beta t}$ into the differential equation and find a solution. Since the real and imaginary parts
are independent, they will each satisfy the differential equation. Both the real and imaginary
parts will be solutions for $Q(t)$, and we will take the real part. Substituting $ze^{\beta t}$
into our differential equation:
\begin{eqnarray*}
z({{d^2e^{\beta t}} \over {dt^2}} + {R \over L} {{de^{\beta t}} \over {dt}} +
\omega_0^2 e^{\beta t}) & = & 0 \\
{{d^2e^{\beta t}} \over {dt^2}} + {R \over L} {{de^{\beta t}} \over {dt}} +
\omega_0^2 e^{\beta t} & = & 0
\end{eqnarray*}
\noindent The constant $z$ factors out of each term. Differentiating
the exponential functions is easy:
\begin{eqnarray*}
e^{\beta t} ( \beta^2 + {R \over L} \beta + \omega_0^2 ) & = & 0 \\
\beta^2 + {R \over L} \beta + \omega_0^2 & = & 0
\end{eqnarray*}
\noindent and we are left with a quadratic equation for $\beta$, whose solution is
\begin{eqnarray*}
\beta & = & {{-R/L \pm \sqrt{(R/L)^2 - 4 \omega_0^2}} \over 2} \\
& = & -{R \over {2L}} \pm \sqrt{({R \over {2L}})^2 - \omega_0^2}
\end{eqnarray*}
\noindent If the resistance is small such that $R/(2L) < \omega_0$, then the
argument in the square root is negative. Defining $\gamma = R/(2L)$ and
$\omega = \sqrt{\omega_0^2 - \gamma^2}$ we have
\begin{equation}
\beta = -\gamma \pm i \omega
\end{equation}
\noindent We take the real part of $ze^{\beta t}$ for $Q(t)$:
\begin{eqnarray*}
Q(t) & = & Re(z e^{(-\gamma \pm i \omega)t}) \\
& = & Re(A_0e^{i \alpha} e^{(-\gamma \pm i \omega)t}) \\
& = & Re(A_0 e^{-\gamma t} e^{\pm i (\omega t + \alpha)}) \\
& = & A_0 e^{-\gamma t} Re(e^{\pm i (\omega t + \alpha)}) \\
& = & A_0 e^{-\gamma t} cos(\omega t + \alpha)
\end{eqnarray*}
\noindent which is the same result we obtained using only real quantities.
Using complex numbers greatly simplifies the solution. We will use
complex numbers a few more times this quarter to solve differential
equations and to add sinusoidal functions having the same frequency.
A final point to mention is that only $\gamma$ and $\omega$ depend on
the circuit elements $R$, $L$, and $C$. $A_0$ and $\alpha$ depend
on the initial conditions.
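The quadratic for $\beta$ can also be solved directly with complex arithmetic. A minimal sketch, assuming the same kind of illustrative component values as above (not taken from the notes):

```python
# Solving the characteristic equation beta^2 + (R/L) beta + omega0^2 = 0
# with complex arithmetic; R, L, C are assumed illustrative values.
import cmath
import math

R, L, C = 10.0, 0.5, 2.0e-6
omega0 = 1.0 / math.sqrt(L * C)

# Quadratic formula; cmath.sqrt handles the negative discriminant.
disc = (R / L) ** 2 - 4.0 * omega0 ** 2
beta_plus = (-(R / L) + cmath.sqrt(disc)) / 2.0
beta_minus = (-(R / L) - cmath.sqrt(disc)) / 2.0

gamma = R / (2.0 * L)                          # expected decay rate
omega = math.sqrt(omega0 ** 2 - gamma ** 2)    # expected oscillation frequency
print(beta_plus, complex(-gamma, omega))
```

The two roots come out as $\beta = -\gamma \pm i\omega$, exactly as in the derivation above.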
Next we will consider an $R$, $L$, $C$ circuit that has a sinusoidal
voltage source.
\bigskip
\noindent {\it RL circuit with sinusoidal voltage source}\\
\bigskip
As a final example of $LRC$ series circuits, we will add a sinusoidal
voltage source to the circuit. Let's first start with a circuit
that has a self-inductance $L$ and a resistance $R$. The self-inductance
is dominated by a solenoid as before. Let the frequency of the
sinusoidal voltage source be $f_d = \omega_d /(2 \pi)$, and the maximum
voltage be $V_m$.
The same physics is true when there is a source of voltage
present: 1) the current is the same through each circuit element since they
are connected in series, and 2) the sum of the voltage changes around the
loop equals the rate of change of the magnetic flux. Since the voltage source is
sinusoidal, after any transient oscillations have damped out, {\it the current
in the circuit will also be sinusoidal with the same frequency as the
voltage source}. That is, the current through every element in the circuit
can be written as
\begin{equation}
I(t) = I_m cos(\omega_d t)
\end{equation}
\noindent Note that $I_m$ is the maximum amplitude that the current will
have. We could have also chosen $sin(\omega_d t)$ and the end results would
be the same. What we need to determine is the relationship between $V_m$ and
$I_m$, and the relative phase between the sinusoidal current and the sinusoidal
voltages. For this, we use the voltage sum law. Equating the voltage changes
around the loop to the change in magnetic flux through the loop gives:
\begin{eqnarray*}
V_d -IR & = & L {{dI} \over {dt}} \\
V_d & = & IR + L{{dI} \over {dt}}
\end{eqnarray*}
\noindent where the current $I(t)$ is the same through the resistor and
the solenoid. If $I(t)=I_m cos(\omega_d t)$, then the voltage source
satisfies
\begin{eqnarray*}
V_d & = & IR + L{{dI} \over {dt}} \\
V_d & = & I_m R cos(\omega_d t) - L\omega_d I_m sin(\omega_d t) \\
V_d & = & I_m ( R cos(\omega_d t) - L\omega_d sin(\omega_d t))
\end{eqnarray*}
\noindent We can simplify this expression by adding the sin and cos functions.
A nice property of sinusoidal functions having the same frequency is that
when they are added together the sum is a single sinusoidal function. We
demonstrate this property for our circuit by using the trig identity
$cos(\omega_d t + \phi) = cos(\omega_d t)cos(\phi)-sin(\omega_d t)sin(\phi)$.
\begin{eqnarray*}
V_d & = & I_m ( R cos(\omega_d t) - L\omega_d sin(\omega_d t)) \\
V_d & = & I_m \sqrt{R^2+(L\omega_d)^2} ({R \over {\sqrt{R^2+(L\omega_d)^2}}}
cos(\omega_d t) - {{L\omega_d} \over {\sqrt{R^2+(L\omega_d)^2}}} sin(\omega_d t))\\
V_d & = & I_m \sqrt{R^2+(L\omega_d)^2}( cos(\phi)cos(\omega_d t) - sin(\phi) sin(\omega_d t)) \\
V_d & = & I_m \sqrt{R^2+(L\omega_d)^2} \; cos(\omega_d t + \phi)
\end{eqnarray*}
\noindent The third line follows by noting that $R$ and $L\omega_d$ are two legs of
a right triangle. The hypotenuse of this right triangle is $\sqrt{R^2 + (L\omega_d)^2}$,
and $\phi$ is an angle in the triangle. Since the cos function varies between $\pm 1$,
the amplitude of the voltage source is $V_m = I_m \sqrt{R^2 + (L\omega_d)^2}$, and
the voltage of the source {\it leads} the current by a phase $\phi$, where
$tan(\phi) = L\omega_d /R$.\\
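These amplitude and phase relations can be confirmed numerically. The sketch below assumes illustrative values for $R$, $L$, $\omega_d$, and $I_m$ (not taken from the notes) and checks that the single sinusoid $V_m cos(\omega_d t + \phi)$ reproduces the two-term expression at sample times:

```python
# Checking V_m = I_m sqrt(R^2 + (L w_d)^2) and tan(phi) = L w_d / R by
# comparing the combined sinusoid with the two-term time-domain sum.
# All numerical values are assumptions chosen for illustration.
import math

R, L = 50.0, 0.2          # assumed resistance (ohms) and inductance (H)
omega_d = 377.0           # ~2*pi*60 rad/s, a typical mains frequency
I_m = 0.5                 # assumed current amplitude (A)

V_m = I_m * math.sqrt(R ** 2 + (L * omega_d) ** 2)
phi = math.atan2(L * omega_d, R)

# The combined form should reproduce the two-term expression at any time t.
for t in (0.0, 1e-3, 4.7e-3):
    direct = I_m * (R * math.cos(omega_d * t) - L * omega_d * math.sin(omega_d * t))
    combined = V_m * math.cos(omega_d * t + phi)
    print(t, direct, combined)
```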
Now let's add a capacitor in series with the resistor and solenoid. We need to
determine what the voltage $V_c$ across the capacitor is when a sinusoidal current
$I_m cos(\omega_d t)$ flows through it. We know $V_c = Q/C$ and $I=+dQ/(dt)$. Here
there is a $+$ sign in front of $dQ/(dt)$ since current is flowing into the capacitor.
Solving for $Q$:
\begin{eqnarray*}
{{dQ} \over {dt}} & = & I_m \; cos(\omega_d t) \\
Q & = & \int I_m \; cos(\omega_d t) \; dt \\
Q & = & {I_m \over \omega_d} sin(\omega_d t)
\end{eqnarray*}
\noindent The integrating constant will be the initial charge on the capacitor, which
we take to be zero. Since $V_c = Q/C$, we have
\begin{equation}
V_c = {I_m \over {\omega_d C}} sin(\omega_d t)
\end{equation}
Adding the voltage changes around the $RLC$ series circuit loop gives
\begin{eqnarray*}
V_d -IR - V_c & = & L {{dI} \over {dt}} \\
V_d & = & IR + L{{dI} \over {dt}} + V_c
\end{eqnarray*}
\noindent The current through each element is $I=I_m \; cos(\omega_d t)$, which yields
for the voltages:
\begin{eqnarray*}
V_d & = & I_m R cos(\omega_d t)-L\omega_d I_m sin(\omega_d t)+{I_m \over {\omega_d C}} sin(\omega_d t)\\
V_d & = & I_m R cos(\omega_d t)-I_m(L\omega_d-{1 \over {\omega_d C}}) sin(\omega_d t)
\end{eqnarray*}
\noindent Finally, we can combine the sin and cos terms into one sinusoidal function as
we did before with only the solenoid present.
\begin{equation}
V_d = I_m \sqrt{R^2+(L\omega_d - {1 \over {\omega_d C}})^2} \; cos(\omega_d t + \phi)
\end{equation}
\noindent where now
\begin{equation}
tan(\phi ) = {{L\omega_d - {1 \over {\omega_d C}}} \over R}
\end{equation}
Let's discuss our results.
\begin{enumerate}
\item The quantity $\sqrt{R^2+(L\omega_d-1/(\omega_d C))^2}$ plays the role of
resistance in our sinusoidally driven series AC circuit. This generalized resistance
is called the {\it impedance} of the circuit and is usually given the symbol $Z$.
\begin{equation}
Z \equiv \sqrt{R^2+(L\omega_d - {1 \over {\omega_d C}})^2}
\end{equation}
\noindent Remember, however, that $Z$ only has meaning for sinusoidal currents
and voltage sources. If $L=0$ and in the absence of a capacitor, $Z=R$.
Note that $V_m=I_m Z$.
\item The term $L\omega_d$ is called the inductive reactance, and usually
labeled as $X_L \equiv L\omega_d$. The inductive reactance has units of resistance
(Ohms) and represents the effective inductive resistance of the solenoid for
sinusoidal currents.
\item The term $1/(\omega_d C)$ is called the capacitive reactance, and usually
labeled as $X_C \equiv 1/(\omega_d C)$. The capacitive reactance has units
of resistance (Ohms) and represents the effective capacitive resistance of the
capacitor for sinusoidal currents.
\item With these definitions, $Z=\sqrt{R^2+(X_L-X_C)^2}$, and
$tan(\phi)=(X_L-X_C)/R$.
\item For high driving frequencies, $X_L=\omega_d L$ is the largest term. The circuit
is mainly inductive. For low driving frequencies, $X_C=1/(\omega_d C)$ is the largest
term, and the circuit is mainly capacitive.
\item If $X_L > X_C$ (an inductive "$L$" situation) the relative phase $\phi$
between the voltage and the current is positive:
voltage leads current. If $X_C > X_L$ (a capacitive "$C$" situation) the relative
phase $\phi$ between the voltage and the current is negative: current leads voltage.
Now you know about "ELI the ICE man".
\item $Z$ is smallest when $X_L=X_C$. In this case $Z=R$. The frequency
for which this occurs is called the resonant frequency. The resonance condition
$X_L=X_C$ is satisfied when the driving angular frequency is
$\omega_d=1/\sqrt{LC}$, or $f_d = 1/(2\pi \sqrt{LC})$. At this resonant frequency,
the impedance takes on its smallest value and one gets the most current for the least voltage.
\end{enumerate}
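The points above can be illustrated numerically. A short sketch, assuming $R = 10 \; \Omega$, $L = 50$ mH, and $C = 20 \; \mu$F (values chosen only for illustration):

```python
# Impedance and resonance of a series RLC circuit; component values are
# assumptions chosen for illustration.
import math

R, L, C = 10.0, 50e-3, 20e-6

def impedance(w):
    X_L = w * L               # inductive reactance (ohms)
    X_C = 1.0 / (w * C)       # capacitive reactance (ohms)
    return math.sqrt(R ** 2 + (X_L - X_C) ** 2)

def phase(w):
    # tan(phi) = (X_L - X_C)/R; atan2 picks the correct sign.
    return math.atan2(w * L - 1.0 / (w * C), R)

w0 = 1.0 / math.sqrt(L * C)   # resonant angular frequency
print(impedance(w0), R)       # at resonance, Z reduces to R

# Off resonance the impedance is larger, and the phase changes sign
# across the resonance (capacitive below, inductive above).
print(impedance(0.9 * w0) > R, impedance(1.1 * w0) > R)
print(phase(0.9 * w0) < 0.0 < phase(1.1 * w0))
```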
There is a nice geometric way to express the impedance using a "phasor diagram".
$R$ points along the "+x-axis", $X_L$ points along the "+y-axis", and $X_C$
points along the "-y-axis". $Z$ and $\phi$ are found by adding the reactances
and resistance like vectors.
\begin{figure}
\includegraphics[width=14cm]{fig2341a.png}
\end{figure}
One can do the analysis using complex numbers; if there is time I will cover this in class.
For those interested, $Z$ as well as $V$, $V_m$, $I$ and $I_m$ will be complex numbers. The
current is $I=I_m e^{i\omega_d t}$, and the voltage is $V=V_m e^{i\omega_d t}$.
$R$ is real, but $X_L = i\omega_d L$ and $X_C = -i/(\omega_d C)$. For a series connection,
the complex impedance is $Z = R + i(\omega_d L - 1/(\omega_d C))$. The complex numbers
$V_m$, $I_m$, and $Z$ are related by
\begin{eqnarray*}
V & = & IZ \\
V_m e^{i\omega_d t} & = & I_me^{i\omega_d t} Z \\
V_m & = & I_m Z \\
V_m & = & I_m \sqrt{R^2 + (\omega_d L - {1 \over {\omega_d C}})^2} \; e^{i \phi}
\end{eqnarray*}
\noindent Taking the real parts of each side yields the same result we obtained using
real numbers. For circuits that are combinations of series and parallel connections,
using complex numbers makes the calculations much much easier.
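A minimal sketch of the complex-impedance approach, with assumed component and drive values (not taken from the notes); the magnitude and argument of $Z$ reproduce the amplitude factor and phase found above with real quantities:

```python
# Series RLC via the complex impedance Z = R + i(w L - 1/(w C)).
# Component and drive values are assumptions chosen for illustration.
import cmath
import math

R, L, C = 10.0, 50e-3, 20e-6   # assumed component values
omega_d = 1500.0               # assumed driving angular frequency (rad/s)

Z = complex(R, omega_d * L - 1.0 / (omega_d * C))

# The same quantities computed with real arithmetic, as in the text.
Z_mag_real = math.sqrt(R ** 2 + (omega_d * L - 1.0 / (omega_d * C)) ** 2)
phi_real = math.atan2(omega_d * L - 1.0 / (omega_d * C), R)

print(abs(Z), Z_mag_real)         # same magnitude
print(cmath.phase(Z), phi_real)   # same phase angle
```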
\bigskip
\noindent {\it Power considerations for the series RLC circuit}\\
Energy will only be transferred into the resistive element in the circuit.
Last quarter we derived that the power $P$ transferred into a resistor with
resistance $R$ is $P = I^2 R$. A sinusoidal voltage produces a sinusoidal
current, so the power varies in time as a sinusoidal squared function.
If $I = I_m sin(\omega_d t)$, then the power is
\begin{equation}
P = I_m^2 R sin^2(\omega_d t)
\end{equation}
\noindent and varies in time. We can calculate the average power, $P_{ave}$ by averaging
$P$ over one period of oscillation:
\begin{equation}
P_{ave} = I_m^2 R ({1 \over T}) \int_0^T sin^2({{2\pi} \over T} t) dt
\end{equation}
\noindent We will show in lecture that the average of $sin^2$ over one cycle equals
$1/2$. So we have for the average power:
\begin{equation}
P_{ave} = {{I_m^2 R} \over 2}
\end{equation}
It is convenient to express the power in terms of the R.M.S. (Root Mean Square) value of the
current (or voltage). The R.M.S. value means the square Root of the average (Mean) value
of the Square of the function. For a sinusoidally varying function, the R.M.S. value equals
the maximum value divided by $\sqrt{2}$. So,
\begin{eqnarray*}
I_{RMS} & = & {I_m \over \sqrt{2}} \\
V_{RMS} & = & {V_m \over \sqrt{2}}
\end{eqnarray*}
\noindent In terms of the R.M.S. values,
\begin{eqnarray*}
P_{ave} & = & I_{RMS}^2 R \\
& = & I_{RMS}{V_{RMS} \over Z}R \\
& = & I_{RMS} V_{RMS} cos(\phi)
\end{eqnarray*}
\noindent since $cos(\phi) = R/Z$.
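The average-power result can be checked by averaging $P(t)$ over one drive period numerically. The sketch below assumes illustrative circuit values (not taken from the notes) and compares the numerical average with $I_{RMS} V_{RMS} cos(\phi)$:

```python
# Averaging P(t) = I(t)^2 R over one period and comparing with
# P_ave = I_RMS * V_RMS * cos(phi).  All values are assumptions
# chosen for illustration.
import math

R, L, C = 10.0, 50e-3, 20e-6   # assumed component values
omega_d = 1500.0               # assumed driving angular frequency (rad/s)
I_m = 0.2                      # assumed current amplitude (A)

X = omega_d * L - 1.0 / (omega_d * C)
Z = math.sqrt(R ** 2 + X ** 2)
phi = math.atan2(X, R)
V_m = I_m * Z

# Midpoint sum of the instantaneous power over one full period;
# only the resistor dissipates energy.
T = 2.0 * math.pi / omega_d
N = 100000
P_ave_numeric = sum(
    (I_m * math.sin(omega_d * (k + 0.5) * T / N)) ** 2 * R for k in range(N)
) / N

I_rms = I_m / math.sqrt(2.0)
V_rms = V_m / math.sqrt(2.0)
print(P_ave_numeric, I_rms * V_rms * math.cos(phi))
```

Both routes give $I_m^2 R / 2$, confirming that the average of $sin^2$ over a cycle is $1/2$.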
\bigskip
\noindent {\it Summary}
\bigskip
In summary, there are three different types of circuit components. One type is resistive
elements, which have $V \propto I$ ($V=RI$). Another type is inductive elements, which have
$V \propto dI/(dt)$ ($V = L (dI/(dt))$). A third type is capacitive elements, which
have $I \propto dV/(dt)$ ($I = (1/C)(dV/(dt))$). {\it If the current through the
circuit element is $I=I_m \; sin(\omega_d t)$} then,
\begin{eqnarray*}
V(t) & = & R I_m \; sin(\omega_d t) \; (Resistive) \\
V(t) & = & L \omega_d I_m \; cos(\omega_d t) \; (Inductive) \\
V(t) & = & - {I_m \over {C \omega_d}} \; cos(\omega_d t) \; (Capacitive)
\end{eqnarray*}
\noindent In series, the voltages (and resistances) add like vectors (phasors)
because the sum of two sinusoidals that have the same frequency is itself a
sinusoidal with the same frequency.
\bigskip
\centerline{\bf Maxwell's Equations}
\bigskip
The equations for the electromagnetic fields that we have developed so far,
in Phy133, are best expressed in terms of line and surface integrals:
\begin{eqnarray*}
\oint \oint \vec{E} \cdot \vec{dA} & = & {{Q_{net}} \over \epsilon_0} \; \; (Coulomb)\\
\oint \oint \vec{B} \cdot \vec{dA} & = & 0 \\
\oint \vec{E} \cdot \vec{dr} & = & - {{d \Phi_B} \over {dt}} \; \; (Faraday)\\
\oint \vec{B} \cdot \vec{dr} & = & \mu_0 I_{net} \; \; (Ampere )
\end{eqnarray*}
Maxwell realized that the equations above are inconsistent with charge conservation.
In particular, there is a problem with the equation referred to as Ampere's Law:
\begin{equation}
\oint \vec{B} \cdot \vec{dr} = \mu_0 I_{net}
\end{equation}
\noindent where $I_{net}$ is the net current that passes through the path
of the line integral $\oint \vec{B} \cdot \vec{dr}$.
We can demonstrate the problem by considering the magnetic field that is produced
by a charging parallel plate capacitor. Let the capacitor have circular plates
with a radius $R$, and a plate separation $d$. Let the wires that connect to the center
of the plates extend to $\pm \infty$. See the figure on the adjacent page.
Suppose the right plate has charge $+Q(t)$ and the left plate a charge of $-Q(t)$.
Suppose also that charge is flowing into the left plate and out of the right plate.
Suppose for a certain time period the current flowing into the plates is a constant,
with value $I$. This current will produce a magnetic field that circulates
around the wire.

[A portion of the notes is missing here: the discussion of Maxwell's
displacement-current correction to Ampere's Law, the resulting wave solutions,
and all but the final row of a table of the electromagnetic spectrum; the
table's final row lists photon energies above $100$ KeV as nuclear transitions.]
\bigskip
In the table above, $\lambda = c/f$ and $E=hf$. In the next course, Phy235,
you will learn about the energy of the radiation. We can directly measure
frequencies if they are below around $10^{10}$ Hz. We can measure wavelength from around
$10$ cm to around $1 \; nm$ using interference effects. Photon energies
can be measured if they are greater than a few electron volts.
Some things to note about our derivation so far:
\begin{enumerate}
\item The solutions with $\vec{E} = E_y(x,t) \hat{j}$ and $\vec{B} = B_z(x,t) \hat{k}$ are
called "plane wave" solutions. The electric field vector is the same at every
point in the entire y-z plane. $\vec{E}$ only varies in space in the "x-direction".
Similarly, the magnetic field vector, $\vec{B}$, only varies in space in
the "x-direction".
\item Note that $\vec{E}(x,t)$ and $\vec{B}(x,t)$ are coupled. There is no solution
for electromagnetic radiation that only has an electric field vector, nor only
a magnetic field vector. To have a solution $E_y(x,t)$, one also needs $B_z(x,t)$.
The coupling is perpendicular. That is, a time changing $E_y$ produces a magnetic
field in the z-direction. A time changing $B_z$ produces an electric field in
the y-direction. The choice of the y-axis for $\vec{E}$ was arbitrary. Whatever
direction we would have chosen for $\vec{E}$, both the radiation direction and the
coupled $\vec{B}$ field would have been perpendicular to $\vec{E}$.
\item Both $E_y(x,t)$ and $B_z(x,t)$ must have the same space and time variation.
That is, if $E_y(x,t) = E_0 g(x \pm ct)$, then $B_z(x,t) = B_0 g(x \pm ct)$. Since
$E_y$ and $B_z$ are coupled, $E_0$ and $B_0$ are related to each other. To
find the relationship, we carry out the derivatives:
\begin{eqnarray*}
{{dE_y} \over {dx}} & = & - {{d B_z} \over {dt}} \\
E_0 g' & = & -B_0 (\pm c) g' \\
E_0 & = & \mp c B_0
\end{eqnarray*}
\noindent by the chain rule. So the maximum value of the electric field $E_y$
equals $c$ times the maximum value of $B_z$.
\item Electromagnetic radiation usually varies sinusoidally in time and space. So
a common form for $g(x \pm ct)$ is
\begin{eqnarray*}
E_y & = & E_0 sin(kx \pm \omega t) \\
B_z & = & B_0 sin(kx \pm \omega t)
\end{eqnarray*}
\noindent where $k=2 \pi / \lambda$, $\omega = 2 \pi f$, and $E_0 = c B_0$.
Here $f$ is the frequency of the radiation.
\item Note that for "$(kx- \omega t)$" the radiation travels in the +x direction
and $B_z = + E_y/c$. For "$(kx + \omega t)$" the radiation travels in the
-x direction and $B_z = - E_y/c$. Thus the direction of the radiation is always
in the $\vec{E} \times \vec{B}$ direction.
\item Experimental confirmation of electromagnetic radiation was provided by Hertz in
1887. He produced the radiation by opening and closing a circuit with a large coil.
He detected the radiation using another coil with a gap. He was able to verify
many of the properties of the radiation predicted by Maxwell's equations.
\item As we showed in our last course (Phy133), if we choose a different unit for charge,
$\epsilon_0$ and $\mu_0$ each change, but the product $\epsilon_0 \mu_0$ does not.
The product $\epsilon_0 \mu_0$ has units of $\mbox{time}^2/\mbox{length}^2$, and is independent of our
choice of units for charge.
\end{enumerate}
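As a quick numerical check of the relations above (a sketch, not part of the original derivation; the constant values are the standard SI figures), we can verify that $c = 1/\sqrt{\epsilon_0 \mu_0}$ and that the coupling fixes $B_0 = E_0/c$:

```python
import math

# Standard SI values for the constants (approximate)
epsilon_0 = 8.8541878128e-12  # F/m
mu_0 = 1.25663706212e-6       # T m/A

# Speed of light recovered from the product epsilon_0 * mu_0
c = 1.0 / math.sqrt(epsilon_0 * mu_0)
print(c)  # close to 3.0e8 m/s

# For a wave E_y = E_0 g(x - ct), the coupling gives B_0 = E_0 / c
E_0 = 300.0      # V/m, an arbitrary illustrative amplitude
B_0 = E_0 / c    # about 1e-6 T
print(B_0)
```

Note that the product $\epsilon_0 \mu_0 = 1/c^2$ is what carries the units of time$^2$/length$^2$, independent of the charge unit.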
We now consider the energy and momentum properties of the electromagnetic radiation,
and its polarization.
\bigskip
\noindent {\it Energy of electromagnetic radiation}
\bigskip
Using the classical picture of electromagnetic radiation, we can find
an expression for the energy density of the radiation. Last quarter we
showed that the energy density of the electric field is given by:
\begin{equation}
U_E = {{\epsilon_0 E^2} \over 2}
\end{equation}
\noindent We also showed that the energy density of the magnetic field is
given by:
\begin{equation}
U_B = {B^2 \over {2 \mu_0}}
\end{equation}
\noindent So the complete energy density of electromagnetic radiation is
\begin{eqnarray*}
U & = & U_E + U_B \\
& = & {{\epsilon_0 E^2} \over 2} + {B^2 \over {2 \mu_0}}
\end{eqnarray*}
\noindent Since $|B| = |E|/c$, we can combine the two terms. For the rest of
the discussion, let's take the radiation to have sinusoidal space and time dependence,
which is a convenient basis to work in. That is, $\vec{E} = E_0 \sin(kx-\omega t) \hat{j}$
and $\vec{B} = B_0 \sin(kx-\omega t) \hat{k}$, where $B_0 = E_0/c$. In this case
\begin{eqnarray*}
U(t) & = & ({{\epsilon_0 E_0^2} \over 2} + {B_0^2 \over {2 \mu_0}}) \sin^2(kx-\omega t)\\
& = & ({{\epsilon_0 E_0^2} \over 2} + {E_0^2 \over {2 \mu_0 c^2}}) \sin^2(kx-\omega t)\\
& = & \epsilon_0 E_0^2 \sin^2(kx-\omega t)
\end{eqnarray*}
\noindent where we have used $c^2 = 1/(\epsilon_0 \mu_0)$. As we can see, the energy
density of the electromagnetic field varies sinusoidally in space and time. It is
most convenient to consider the average energy density. That is, the energy density
averaged over time for a fixed location in space. We showed before that the average
of $\sin^2(\omega t + \theta)$ over one complete cycle is $1/2$. Therefore, the time
averaged energy density is
\begin{equation}
U_{ave} = {{\epsilon_0 E_0^2} \over 2}
\end{equation}
\noindent $U_{ave}$ is usually expressed in terms of $E_0$, but we could also have
expressed it as $U_{ave}= B_0^2/(2 \mu_0)$. Note that $U_E$, $\epsilon_0 E_0^2/2$, and
$U_B$, $B_0^2/(2 \mu_0)$, {\it have the same magnitude} since
$B_0^2=E_0^2/c^2$ and $c^2=1/(\epsilon_0 \mu_0)$.
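These two facts can be checked numerically (a sketch with illustrative values; the field amplitude is arbitrary): the time average of $\sin^2$ over a full cycle is $1/2$, and the electric and magnetic contributions to the energy density are equal when $B_0 = E_0/c$:

```python
import math

epsilon_0 = 8.8541878128e-12  # F/m
mu_0 = 1.25663706212e-6       # T m/A
c = 1.0 / math.sqrt(epsilon_0 * mu_0)

# Average sin^2 over one full cycle by direct numerical summation
N = 100000
avg = sum(math.sin(2 * math.pi * n / N) ** 2 for n in range(N)) / N
print(avg)  # 0.5 to numerical accuracy

# Equality of the two energy-density contributions when B_0 = E_0 / c
E_0 = 100.0          # V/m, illustrative amplitude
B_0 = E_0 / c
U_E = epsilon_0 * E_0 ** 2 / 2
U_B = B_0 ** 2 / (2 * mu_0)
print(U_E, U_B)      # equal, since c^2 = 1/(epsilon_0 mu_0)
```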
A useful quantity to consider is the energy of the EM radiation per area per unit time.
This quantity is referred to as the intensity of the radiation. Since the radiation
travels at the speed $c$, the energy passing through an area $A$ in a time $\Delta t$
is
\begin{equation}
energy = U A (c \Delta t)
\end{equation}
\noindent since $U$ is the energy per volume and a volume of $A(c \Delta t)$ passes
through the area in a time $\Delta t$. So,
\begin{eqnarray*}
I & = & {{energy} \over {A (\Delta t)}} \\
& = & cU \\
& = & c \epsilon_0 E_0^2 \sin^2(kx-\omega t)
\end{eqnarray*}
We can also express $I$ in terms of the magnetic field:
\begin{eqnarray*}
I & = & c \epsilon_0 E_0 (cB_0) \sin^2(kx-\omega t) \\
& = & c^2 \epsilon_0 E_0 B_0 \sin^2(kx- \omega t) \\
& = & {{E_0B_0} \over \mu_0} \sin^2(kx - \omega t)
\end{eqnarray*}
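For a sense of scale, here is a sketch that inverts the time-averaged intensity, $I_{ave} = c \epsilon_0 E_0^2/2$, to find the field amplitudes. The 1000 W/m$^2$ figure for bright sunlight is an assumed round number for illustration, not a value from these notes:

```python
import math

epsilon_0 = 8.8541878128e-12  # F/m
mu_0 = 1.25663706212e-6       # T m/A
c = 1.0 / math.sqrt(epsilon_0 * mu_0)

I_ave = 1000.0  # W/m^2, rough round number for bright sunlight (assumption)

# Time-averaged intensity: I_ave = c * epsilon_0 * E_0^2 / 2
E_0 = math.sqrt(2 * I_ave / (c * epsilon_0))
B_0 = E_0 / c
print(E_0)  # roughly 870 V/m
print(B_0)  # roughly 2.9e-6 T
```

Even for bright sunlight the magnetic amplitude is tiny compared to familiar laboratory fields, which is why optical effects are usually described in terms of $\vec{E}$.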
\noindent Since the wave travels in a direction perpendicular to both $\vec{E}$
and $\vec{B}$ we can define an "intensity vector" with a magnitude of
$I = EB/\mu_0$ in the direction of the radiation as:
\begin{equation}
\vec{S} \equiv {1 \over \mu_0} \vec{E} \times \vec{B}
\end{equation}
\noindent This vector was first investigated by Poynting, and is called "Poynting's Vector".
Its magnitude equals the energy/area/time (intensity), and it poynts in the direction of
the energy flow.
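The direction property can be sketched with a direct cross product (illustrative field values; $\vec{E}$ along $\hat{j}$ and $\vec{B}$ along $\hat{k}$ as in our solution): the result points along $+\hat{i}$ with magnitude $EB/\mu_0$:

```python
import math

mu_0 = 1.25663706212e-6  # T m/A
c = 2.9979e8             # m/s

def cross(a, b):
    # 3D cross product, components ordered (x, y, z)
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

E = (0.0, 100.0, 0.0)       # E along +y, 100 V/m (illustrative)
B = (0.0, 0.0, 100.0 / c)   # B along +z, magnitude E/c

# Poynting vector S = (E x B) / mu_0
S = tuple(comp / mu_0 for comp in cross(E, B))
print(S)  # only the x-component is nonzero: the energy flows in +x
```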
\begin{figure}
\includegraphics[width=14cm]{fig2341c.png}
\end{figure}
The Poynting vector equals the electromagnetic energy that is transported per area, per
time. Although we showed this property here for electromagnetic radiation, it can
be shown to be always valid. In your next EM course, Phy314, you will show, using
vector calculus, that the rate at which the electromagnetic field does work $W$ on the
charges within a volume equals the rate of decrease of the field energy in the volume
minus the flux of the Poynting vector out through the surface:
\begin{equation}
{{dW} \over {dt}} = \int \vec{E} \cdot \vec{J} dV = -{d \over {dt}}
\int {1 \over 2}(\epsilon_0E^2+B^2/\mu_0)dV -
\oint {{\vec{E} \times \vec{B}} \over \mu_0} \cdot \vec{dA}
\end{equation}
\noindent We leave this proof for your next course on electromagnetism, but mention
it here so you will appreciate the significance of Poynting's Vector. We will do
examples in class.
A distinguishing feature derived from Maxwell's equations is that {\it the energy
of electromagnetic radiation (i.e. intensity) is
proportional to the electric field squared, $E^2$}. That is, brighter light
will have a larger electric field. The same relationship between wave amplitude
and energy is true for sound waves, water waves, and waves in general that
have a medium, i.e. mechanical waves. As you will see next quarter, the photoelectric
effect cannot be understood from the classical model of electromagnetic radiation
presented here.
\bigskip
\noindent {\it Polarization}
\bigskip
In the solution we just derived for electromagnetic radiation, we chose the electric
field to point in the $\hat{j}$ direction. If we had chosen $\vec{E}$ to point
in the $\hat{k}$ direction, then $\vec{B}$ would have been in the $-\hat{j}$ direction and
the wave would still have propagated in the +x-direction. The direction of propagation
of the radiation is in the $\vec{E} \times \vec{B}$ direction. So, $\vec{E}$
can point anywhere in the y-z plane and the radiation can still travel in the
+x direction.
If $\vec{E}$ points in either the $+\hat{j}$ or $-\hat{j}$ direction, the radiation is
said to be linearly polarized in the "y-direction". If $\vec{E}$ points in either
the $+\hat{k}$ or $-\hat{k}$ direction, the radiation is said to be linearly polarized
in the "z-direction". Any superposition of these two solutions is also
linearly polarized in the direction of the $\vec{E}$ field. There are two linear
polarization states for electromagnetic radiation. For example, for radiation
that propagates in the $+x$ direction, and is linearly polarized at an
angle $\theta$ with respect to the y-axis, the electric field is
\begin{equation}
\vec{E}(x,t) = E_0 (\cos(\theta ) \hat{j} + \sin(\theta )\hat{k}) \sin(kx-\omega t)
\end{equation}
\noindent where $E_0$ is the magnitude of the electric field. The corresponding
magnetic field is
\begin{equation}
\vec{B}(x,t) = B_0 (\cos(\theta ) \hat{k} - \sin(\theta )\hat{j}) \sin(kx-\omega t)
\end{equation}
\noindent where $B_0 = E_0/c$. Another interesting combination of the two polarization
states is the following:
\begin{equation}
\vec{E}(x,t) = E_0 \hat{j} \cos(kx-\omega t ) + E_0 \hat{k} \sin(kx-\omega t)
\end{equation}
\noindent with corresponding magnetic field
\begin{equation}
\vec{B}(x,t) = B_0 \hat{k} \cos(kx-\omega t ) - B_0 \hat{j} \sin(kx-\omega t)
\end{equation}
\noindent where $B_0 = E_0/c$. For this combination, the electric field vector
rotates in space and time, its tip tracing out a circle in the y-z plane. This type
of polarization is termed
right-handed "circularly polarized" radiation. For left-handed circularly polarized
light, the electric field and magnetic fields are
\begin{eqnarray*}
\vec{E}(x,t) & = & E_0 \hat{j} \cos(kx-\omega t ) - E_0 \hat{k} \sin(kx-\omega t) \\
\vec{B}(x,t) & = & B_0 \hat{k} \cos(kx-\omega t ) + B_0 \hat{j} \sin(kx-\omega t)
\end{eqnarray*}
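A small sketch (illustrative amplitude and wave numbers) showing the defining features of the circularly polarized solutions above: the magnitude of $\vec{E}$ stays constant as the vector rotates, and $\vec{E} \cdot \vec{B} = 0$ at every instant:

```python
import math

E_0 = 1.0            # V/m, illustrative amplitude
c = 2.9979e8         # m/s
B_0 = E_0 / c
k, omega = 1.0, 1.0  # illustrative; only the phase kx - omega*t matters here

def E_field(x, t):
    # Right-handed circular polarization, components ordered (x, y, z)
    ph = k * x - omega * t
    return (0.0, E_0 * math.cos(ph), E_0 * math.sin(ph))

def B_field(x, t):
    ph = k * x - omega * t
    return (0.0, -B_0 * math.sin(ph), B_0 * math.cos(ph))

for t in (0.0, 0.3, 1.1):
    E = E_field(0.0, t)
    B = B_field(0.0, t)
    mag = math.sqrt(sum(comp ** 2 for comp in E))
    dot = sum(e * b for e, b in zip(E, B))
    print(mag, dot)  # magnitude stays E_0; E.B stays (essentially) zero
```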
\noindent One could also have elliptically polarized radiation. There are two
independent polarization states for electromagnetic radiation. One can express
the polarization in terms of
two linear polarization states, in terms of two circular polarization
states, or in terms of any two independent basis states.
Light from an incandescent source, i.e. a light bulb, has equal amounts of both polarizations,
and is "unpolarized". There exist polaroids that allow only one type of polarization
state to pass through. A linear polaroid will have a polarization axis. Radiation
that is polarized along the polarization axis will pass through, and radiation that
is polarized perpendicular to the polarization axis will not.
If unpolarized radiation "hits" a linear polaroid, then the intensity is reduced by a factor of
$1/2$. The polarization of the transmitted radiation will be along the axis of
the polaroid.
Consider linearly polarized radiation that "hits" a linear polaroid. Let $\theta$
be the angle between the direction of the polarization of the radiation and the
axis of the polaroid. Then, the electric field that passes through the
polaroid will be reduced by a factor
of $\cos(\theta )$ after passing through the polaroid. The intensity of the
radiation will be decreased by a factor of $cos^2(\theta )$. The polarization
of the transmitted radiation will be along the axis of the polaroid.
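The two transmission rules can be sketched numerically (the input intensity and angles are chosen purely for illustration): unpolarized light loses half its intensity, and linearly polarized light follows the $\cos^2 \theta$ factor:

```python
import math

def transmitted_intensity(I_in, theta=None):
    """Intensity after an ideal linear polaroid.

    theta is the angle (radians) between the incoming polarization and
    the polaroid axis; theta=None means unpolarized input.
    """
    if theta is None:
        return I_in / 2          # unpolarized: cos^2 averaged over all angles
    return I_in * math.cos(theta) ** 2

I_0 = 100.0  # W/m^2, illustrative input intensity
print(transmitted_intensity(I_0))                    # unpolarized: 50.0
print(transmitted_intensity(I_0, math.radians(30)))  # 75.0
print(transmitted_intensity(I_0, math.radians(90)))  # essentially 0
```

Note that the field is cut by $\cos\theta$ but the intensity by $\cos^2\theta$, since intensity goes as $E^2$.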
We will do some nice examples in lecture with polaroids.
This concludes the first third of our course. In the next third we will continue with
our investigation of electromagnetic radiation, in particular light. The main topics
will be geometric and physical optics.
\end{document}