5.3. Sampling

So far CT and DT signals have been treated more or less independently of each other. In most practical applications of discrete-time signal processing, however, the signal \(x[n]\) finds its origin in some continuous-time signal \(x(t)\).

The most common way to obtain a DT signal from a CT signal is called sampling. We assume the sampling to be equidistant in time; sampling then reduces a CT signal to the series of values

\[x[n] = x(n T_s)\]

where \(T_s\) is the sampling period. The frequency \(f_s=1/T_s\) is called the sampling frequency.
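The definition \(x[n] = x(n T_s)\) can be illustrated with a short sketch; the particular signal and the values of \(f\) and \(f_s\) below are chosen only for this example.

```python
import numpy as np

f = 5.0            # frequency of the example CT signal, in Hz
f_s = 40.0         # sampling frequency f_s, in Hz
T_s = 1.0 / f_s    # sampling period T_s = 1 / f_s

def x_ct(t):
    """The continuous-time signal x(t), here a 5 Hz cosine."""
    return np.cos(2.0 * np.pi * f * t)

n = np.arange(8)          # sample indices n = 0, 1, ..., 7
x_n = x_ct(n * T_s)       # the DT signal: x[n] = x(n * T_s)

print(np.round(x_n, 4))
```

Since \(f/f_s = 1/8\), the samples trace out one period of the cosine every eight indices: \(x[0]=1\), \(x[2]=0\), \(x[4]=-1\), and so on.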

It should be noted that sampling as defined mathematically above is an idealization. In practice every measurement needs a probe of finite spatial and temporal extent. In these notes we simply assume that ideal sampling is close enough to reality.

The process of reconstructing the signal \(x(t)\) from its samples \(x[n]\) is called interpolation. Remarkable as it may seem, it is possible to reconstruct \(x(t)\) exactly in case the original signal does not contain frequencies greater than or equal to half the sampling frequency. The proof of this sampling theorem is the main subject of this chapter.
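To make the claim of exact reconstruction concrete, the sketch below evaluates the classical sinc-interpolation sum \(x(t) = \sum_n x[n]\,\mathrm{sinc}\!\big((t - n T_s)/T_s\big)\) for a bandlimited test signal. The infinite sum must be truncated in code, so the reconstruction here is only approximate; the signal, the truncation range, and the values of \(f\) and \(f_s\) are assumptions made for this example.

```python
import numpy as np

f = 3.0                 # test-signal frequency; note f < f_s / 2
f_s = 40.0              # sampling frequency
T_s = 1.0 / f_s         # sampling period

n = np.arange(-200, 201)                  # truncation of the infinite sum
x_n = np.sin(2.0 * np.pi * f * n * T_s)   # samples x[n] = x(n * T_s)

def reconstruct(t):
    """Truncated interpolation sum: sum_n x[n] * sinc((t - n*T_s)/T_s).

    np.sinc is the normalized sinc, sin(pi*u)/(pi*u), which is
    exactly the kernel appearing in the interpolation formula.
    """
    return np.sum(x_n * np.sinc((t - n * T_s) / T_s))

t = 0.0123              # an arbitrary instant between two samples
approx = reconstruct(t)
exact = np.sin(2.0 * np.pi * f * t)
print(approx, exact)    # the two values agree closely
```

The truncation to \(|n| \le 200\) introduces a small error, since the sinc kernel decays only like \(1/|t - n T_s|\); the exact statement and proof of the theorem follow in this chapter.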