
Classification of signals

Deterministic and stochastic signals


Both deterministic and stochastic signals occur in every communication system.

$\text{Definition:}$ A $\text{deterministic signal}$ is present if its time function $x(t)$ can be given completely in analytical form.


Since the time function $x(t)$ is known and can be specified unambiguously for all times $t$, there always exists a spectral function $X(f)$ for these signals, which can be calculated by means of the Fourier series or the Fourier transformation.

$\text{Definition:}$ One speaks of a $\text{stochastic signal}$ or of a $\text{random signal}$ if the signal course $x(t)$ cannot be described - or at least not completely - in mathematical form. Such a signal cannot be precisely predicted for the future.

Example of a deterministic signal (above)
and a stochastic signal (below)

$\text{Example 1:}$ The graphic shows the time courses of a deterministic and a stochastic signal:

  • Above is a periodic square-wave signal $x_1(t)$ with the period $T_0$ ⇒ deterministic signal,
  • below a Gaussian noise signal $x_2(t)$ with the mean value $2\ \rm V$ ⇒ stochastic signal.


For such a nondeterministic signal $x_2(t)$, therefore, no spectral function $X_2(f)$ can be specified, since the Fourier series and the Fourier transformation presuppose exact knowledge of the time function for all times $t$.
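The distinction can be illustrated numerically. The sketch below is not from the text: the square-wave parameters, the noise standard deviation and the seed are assumed values. Only the deterministic signal can be evaluated analytically at an arbitrary future time; for the noise signal, merely statistical parameters such as the mean can be stated.

```python
import numpy as np

# Deterministic signal: periodic square wave with period T0
# (hypothetical parameters; the chapter's x1(t) is only shown graphically).
T0 = 1e-3          # period in seconds
A = 1.0            # amplitude in volts

def x1(t):
    """Analytical form: +A in the first half-period, -A in the second."""
    return A * np.where((t % T0) < T0 / 2, 1.0, -1.0)

# x1(t) is fully specified for every t, so any future value is predictable:
print(x1(3.25e-3))                 # value 3.25 periods ahead -> 1.0

# Stochastic signal: Gaussian noise with mean 2 V (as in Example 1);
# standard deviation 0.5 V and the seed are assumptions.
rng = np.random.default_rng(42)
x2 = 2.0 + rng.normal(0.0, 0.5, size=1000)

# Only statistical parameters, not the exact course, can be given:
print(np.mean(x2))                 # close to the 2 V mean value
```
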


Information-carrying signals are always of a stochastic nature. Their description as well as the definition of suitable parameters takes place in the book Stochastic Signal Theory.

However, deterministic signals are also of great importance for communications engineering. Examples of this are:

  • Test signals for the design of communication systems,
  • Carrier signals for frequency division multiplex systems, and
  • a pulse for sampling an analog signal or for time regeneration of a digital signal.

Causal and acausal signals


In communications engineering, one often calculates with signals that are unlimited in time; the domain of definition of the signal then extends from $t = -\infty$ to $t = +\infty$.

In reality, however, such signals do not exist, because every signal must have been switched on at some point. If one chooses the switch-on time $t = 0$ - arbitrarily, but still sensibly - one arrives at the following classification:

$\text{Definitions:}$

  • A signal $x(t)$ is called $\text{causal}$ if it does not exist for all times $t < 0$ or is identically zero.
  • If this condition is not fulfilled, the signal (or system) is called $\text{acausal}$.
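The causality condition can be checked numerically on a sampled signal. The helper below is a minimal sketch, not from the text: it simply tests whether the signal vanishes (up to a tolerance) for all sampling times $t < 0$.

```python
import numpy as np

# Minimal causality check for sampled signals (illustrative helper):
# a signal is causal if it is identically zero for all t < 0.
def is_causal(t, x, tol=1e-12):
    """Return True if x(t) is numerically zero for every t < 0."""
    t, x = np.asarray(t), np.asarray(x)
    return bool(np.all(np.abs(x[t < 0]) <= tol))

t = np.linspace(-1.0, 1.0, 2001)

step = np.where(t >= 0, 1.0, 0.0)        # unit step: switched on at t = 0
shifted = np.where(t >= -0.5, 1.0, 0.0)  # already "on" before t = 0

print(is_causal(t, step))     # causal
print(is_causal(t, shifted))  # acausal
```
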


The present book "Signal Representation" mostly considers acausal signals and systems. This has the following reasons:

  • Acausal signals (and systems) are mathematically easier to handle than causal ones. For example, the spectral function can be determined by means of the Fourier transformation, and no extensive knowledge of function theory is required, as is the case with the Laplace transformation.
  • Acausal signals and systems describe the situation completely and correctly if one ignores the problem of the switch-on process and is therefore only interested in the steady state.


The description of causal signals and systems with the help of the Laplace transformation follows in the book Linear time-invariant systems.

Causal and acausal system

$\text{Example 2:}$ The upper part of the graphic shows a causal transmission system:

  • If a step function $x(t)$ is applied to its input, the output signal $y(t)$ can rise from zero to its maximum value only from the time $t = 0$ onwards.
  • Otherwise the causal relationship that the effect cannot set in before the cause would be violated.


In the lower picture, this causality is no longer given. As is easy to see, in this example one can get from the acausal to the causal representation with an additional delay of one millisecond.

Energy-limited and power-limited signals


At this point, two important signal description quantities must first be introduced, namely the $\text{signal energy}$ and the $\text{signal power}$.

  • In physical terms, energy corresponds to work and has, for example, the unit "Ws".
  • Power is defined as "work per time" and therefore has the unit "W".


According to the elementary laws of electrical engineering, both quantities depend on the resistance $R$. In order to eliminate this dependency, the resistance $R = 1\,\Omega$ is often used as a basis in communications engineering. Then the following definitions apply:

$\text{Definition:}$ The $\text{energy}$ of the signal $x(t)$ is calculated as follows:

$$E_x = \lim_{T_{\rm M} \to \infty} \int_{-T_{\rm M}/2}^{T_{\rm M}/2} x^2(t)\,{\rm d}t.$$

$\text{Definition:}$ To calculate the (mean) $\text{power}$, one must additionally divide by the time $T_{\rm M}$ before taking the limit:

$$P_x = \lim_{T_{\rm M} \to \infty} \frac{1}{T_{\rm M}} \cdot \int_{-T_{\rm M}/2}^{T_{\rm M}/2} x^2(t)\,{\rm d}t.$$

Here $T_{\rm M}$ denotes the measurement duration, assumed symmetrical with respect to the time origin $(t = 0)$, during which the signal is observed. In general, this time interval must be chosen very large; ideally, $T_{\rm M}$ should approach infinity.
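The two definitions translate directly into a numerical approximation: the energy sums $x^2(t)\,{\rm d}t$ over the measurement window, the mean power additionally divides by $T_{\rm M}$. The DC signal and the concrete numbers below are assumed for illustration.

```python
import numpy as np

# Numerical counterparts of the two definitions above, applied to a
# 2 V DC signal observed for T_M = 10 s (assumed example values).
dt = 1e-3                        # time resolution in seconds
t = np.arange(-5.0, 5.0, dt)     # symmetric measurement window
T_M = len(t) * dt                # measurement duration: 10 s

x = np.full_like(t, 2.0)         # 2 V DC signal

E_x = np.sum(x**2) * dt          # approximates the energy integral
P_x = E_x / T_M                  # mean power: divide by T_M

print(E_x, P_x)
# For the DC signal, E_x grows without bound as T_M -> infinity,
# while P_x stays at 4 V^2: a power-limited signal.
```
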


If $x(t)$ denotes a voltage curve with the unit $\text{V}$, then according to the above equations

  • the signal energy has the unit $\text{V}^2\text{s}$,
  • the signal power has the unit $\text{V}^2$.


This statement also means: the above definitions implicitly assume the reference resistance $R = 1\,\Omega$.

Energy-limited and power-limited signal

$\text{Example 3:}$ Now the energy and power of two exemplary signals are calculated.

The upper graphic shows a square pulse $x_1(t)$ with amplitude $A$ and duration $T$.

  • The signal energy of this pulse is $E_1 = A^2 \cdot T$.
  • For the signal power, division by $T_{\rm M}$ and taking the limit $(T_{\rm M} \to \infty)$ yields the value $P_1 = 0$.


For the cosine signal $x_2(t)$ with the amplitude $A$ according to the lower diagram, the following applies:

  • The signal power is $P_2 = A^2/2$, regardless of the frequency.
  • The signal energy $E_2$ (integral over the power for all times) is infinite.


With $A = 4\ {\rm V}$, the power is $P_2 = 8\ {\rm V}^2$. With a resistance of $R = 50\,\Omega$, this corresponds to the physical power $8/50\,{\rm V}^2/\Omega = 160\,{\rm mW}$.
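The results of Example 3 can be verified numerically. The pulse duration, measurement window and cosine frequency below are assumed values; the example confirms $E_1 = A^2 \cdot T$ for the pulse and $P_2 = A^2/2$ for the cosine.

```python
import numpy as np

# Numerical check of Example 3 (rectangular pulse and cosine, A = 4 V).
A, T = 4.0, 1e-3                  # amplitude 4 V, pulse duration 1 ms (assumed)
dt = 1e-7
t = np.arange(-0.05, 0.05, dt)    # measurement window T_M = 0.1 s

# Rectangular pulse of duration T: finite energy, vanishing power.
x1 = np.where(np.abs(t) < T / 2, A, 0.0)
E1 = np.sum(x1**2) * dt
print(E1)                         # approx. A^2 * T = 0.016 V^2 s

# Cosine with amplitude A: finite power A^2/2, infinite energy.
f0 = 1e3                          # 1 kHz, an assumed frequency
x2 = A * np.cos(2 * np.pi * f0 * t)
P2 = np.sum(x2**2) * dt / (len(t) * dt)
print(P2)                         # approx. A^2 / 2 = 8 V^2
```
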


Based on this example, the following classification criteria can be defined:

$\text{Definition:}$ A signal $x(t)$ with finite energy $E_x$ and vanishing power $(P_x = 0)$ is called $\text{energy-limited}$.

  • Pulse-shaped signals such as the signal $x_1(t)$ in the above example are always energy-limited. In most cases, the signal values differ from zero only for a finite period of time. In other words: such signals are often time-limited.
  • But signals that are unlimited in time can also have finite energy. In later chapters you will find further information on energy-limited and therefore aperiodic signals, which include, for example, the Gaussian pulse and the exponential pulse.

$\text{Definition:}$ A signal $x(t)$ with finite power $P_x$ and correspondingly infinite energy $(E_x \to \infty)$ is called $\text{power-limited}$.

  • All power-limited signals are also infinitely extended in time. Examples of this are the DC signal and harmonic oscillations such as the cosine signal $x_2(t)$ in $\text{Example 3}$, which are described in detail in the chapter Periodic Signals.
  • Most stochastic signals are also power-limited - see the book Stochastic Signal Theory.

Value-continuous and value-discrete signals

$\text{Definitions:}$

  • A signal is called $\text{value-continuous}$ if the decisive signal parameter - for example the instantaneous value - can assume all values of a continuum (for example an interval).
  • If, on the other hand, only a countable number of different values is possible for the signal parameter, the signal is called $\text{value-discrete}$. The number $M$ of possible values is called the "number of levels" or the "range of values".
  • Analog transmission systems always work with value-continuous signals.
  • With digital systems, on the other hand, most signals - but not all - are value-discrete.
Value-continuous and value-discrete signal

$\text{Example 4:}$ The upper picture shows in blue a section of a value-continuous signal $x(t)$, which can assume values between $\pm 8\ \rm V$.

  • Shown in red is the signal $x_{\rm Q}(t)$, quantized to $M = 8$ levels, with the possible signal values $\pm 1\ \rm V$, $\pm 3\ \rm V$, $\pm 5\ \rm V$ and $\pm 7\ \rm V$.
  • For this signal $x_{\rm Q}(t)$, the instantaneous value was considered the decisive signal parameter.
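A quantizer matching Example 4 can be sketched in a few lines. This is an illustrative implementation under assumed conventions (equidistant mid-rise levels), not code from the text; it maps inputs between $\pm 8\ \rm V$ to the $M = 8$ output values $\pm 1, \pm 3, \pm 5, \pm 7\ \rm V$.

```python
import numpy as np

# Mid-rise quantizer with M = 8 levels, matching Example 4 (minimal sketch):
# output values +/-1, +/-3, +/-5, +/-7 V for inputs between +/-8 V.
def quantize(x, M=8, x_max=8.0):
    """Map a value-continuous signal to M equidistant levels."""
    delta = 2 * x_max / M                          # step size: 2 V here
    idx = np.clip(np.floor(x / delta), -M // 2, M // 2 - 1)
    return (idx + 0.5) * delta                     # mid-rise output value

x = np.array([0.4, 2.7, -6.2, 7.9, -8.0])          # sample inputs in volts
print(quantize(x))                                 # [ 1.  3. -7.  7. -7.]
```
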
FSK signal - value-continuous and yet binary

In the case of an FSK ("Frequency Shift Keying") system, on the other hand, the instantaneous frequency is the essential signal parameter.

This is why the signal $s_{\rm FSK}(t)$ shown below is also called value-discrete with the number of levels $M = 2$ and the possible frequencies $1\ \rm kHz$ and $5\ \rm kHz$, although the instantaneous value is value-continuous.

Time-continuous and time-discrete signals


For the signals considered so far, the signal parameter was defined at every point in time. One then speaks of a "time-continuous signal".

$\text{Definition:}$ With a $\text{time-discrete signal}$, in contrast, the signal parameter is only defined at discrete points in time $t_\nu$, whereby these points in time are usually chosen to be equidistant:

$$t_\nu = \nu \cdot T_{\rm A}.$$

Since such a signal is generated, for example, by sampling a time-continuous signal, we denote $T_{\rm A}$ as the $\text{sampling interval}$ and its reciprocal $f_{\rm A} = 1/T_{\rm A}$ as the $\text{sampling frequency}$.
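Sampling at the equidistant instants $t_\nu = \nu \cdot T_{\rm A}$ can be sketched as follows; the cosine signal, its frequency and the sampling frequency are assumed for illustration.

```python
import numpy as np

# Sampling a time-continuous signal at t_nu = nu * T_A
# (signal and parameters are assumed example values).
f_signal = 50.0                  # 50 Hz message signal
f_A = 1000.0                     # sampling frequency f_A = 1/T_A
T_A = 1.0 / f_A                  # sampling interval: 1 ms

nu = np.arange(0, 20)            # sample indices
t_nu = nu * T_A                  # equidistant sampling instants
x_nu = np.cos(2 * np.pi * f_signal * t_nu)   # sequence <x_nu> of samples

print(t_nu[1], x_nu[0])          # first sampling instant: 1 ms; x_0 = 1.0
```
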

Time-continuous and time-discrete signal

$\text{Example 5:}$ The time-discrete signal $x_{\rm A}(t)$ is obtained by sampling the time-continuous and value-continuous message signal $x(t)$ shown above at intervals of $T_{\rm A}$.

  • The time course $x_{\rm R}(t)$ sketched below differs from the actual time-discrete representation $x_{\rm A}(t)$ in that the infinitely narrow samples (mathematically describable by Dirac impulses) are replaced by rectangular pulses of duration $T_{\rm A}$.
  • According to the above definition, such a signal can also be referred to as time-discrete.


The following also applies:

  • A time-discrete signal $x(t)$ is completely determined by the sequence $\left\langle x_\nu \right\rangle$ of its samples.
  • These samples can be value-continuous as well as value-discrete.
  • The mathematical description of time-discrete signals is given in the chapter Time-Discrete Signal Representation.

Analog signals and digital signals

Analog and digital signals

$\text{Example 6:}$ The graphic illustrates the signal properties

  • "value-continuous" and "value-discrete", as well as
  • "time-continuous" and "time-discrete".


The following definitions also apply:

$\text{Definition:}$

  • An $\text{analog signal}$ is both value-continuous and time-continuous. Such signals reproduce a continuous process continuously.
  • Examples of this are unprocessed speech, music and image signals.

$\text{Definition:}$

  • A $\text{digital signal}$, on the other hand, is always value-discrete and time-discrete, and the message it contains consists of the symbols of a finite symbol set.
  • It can be, for example, a sampled and quantized (and in some form coded) speech, music or image signal, but also a data signal when a file is downloaded from a server on the Internet.


Depending on the number of levels, digital signals are also known by other names, for example

  • with $M = 2$: binary digital signal or $\text{binary signal}$,
  • with $M = 3$: ternary digital signal or $\text{ternary signal}$,
  • with $M = 4$: quaternary digital signal or $\text{quaternary signal}$.


The training video Analog and Digital Signals summarizes the classification features dealt with in this chapter in a compact way.

Exercises for the chapter


Exercise 1.2: Signal classification

Exercise 1.2Z: Pulse code modulation