Correlation analysis of deterministic signals. Autocorrelation and cross-correlation functions of signals

In the early stages of the development of radio engineering, the question of choosing the best signals for specific applications was not very pressing. This was due, on the one hand, to the relatively simple structure of the transmitted messages (telegraph pulses, radio broadcasting); on the other hand, the practical generation of signals of complex shape, together with the equipment for coding, modulating and demodulating them, proved difficult to implement.

Currently, the situation has changed radically. In modern radio-electronic systems, the choice of signals is dictated primarily not by the technical convenience of their generation, conversion and reception, but by the possibility of an optimal solution of the tasks envisaged in the design of the system. To understand how the need for signals with specially selected properties arises, consider the following example.

Comparison of time-shifted signals.

Let us turn to a simplified picture of the operation of a pulse radar designed to measure the distance to a target. Here, information about the measured object is contained in the value of τ, the time delay between the probing and received signals. The shapes of the probing and received signals are the same for any delay.

The block diagram of a radar signal processing device intended for range measurement may look as shown in Fig. 3.3.

The system consists of a set of elements that delay the “reference” transmitted signal by fixed time intervals (Fig. 3.3).

Fig. 3.3. Device for measuring signal delay time

The delayed signals, together with the received signal, are fed to comparison devices, which operate in accordance with the principle: the output signal appears only if both input oscillations are “copies” of each other. Knowing the number of the channel in which the specified event occurs, you can measure the delay, and therefore the range to the target.

Such a device will work the more accurately, the more the signal and its time-shifted “copy” differ from each other.

Thus, we have gained a qualitative idea of which signals can be considered “good” for a given application.

Let us move on to the exact mathematical formulation of the problem posed and show that this range of issues is directly related to the theory of energy spectra of signals.

Autocorrelation function of the signal.

To quantify the degree of difference between a signal and its time-shifted copy, it is customary to introduce the autocorrelation function (ACF) of the signal, equal to the scalar product of the signal and its copy:

B_s(τ) = ∫_{-∞}^{∞} s(t) s(t − τ) dt. (3.15)

In what follows, we will assume that the signal under study has a pulsed character and is localized in time, so that an integral of the form (3.15) certainly exists.

It is immediately clear that at τ = 0 the autocorrelation function becomes equal to the signal energy:

B_s(0) = ∫_{-∞}^{∞} s²(t) dt = E_s.

Among the simplest properties of the ACF is its parity:

B_s(τ) = B_s(−τ).

Indeed, if we make the change of variables x = t − τ in the integral (3.15), then

B_s(τ) = ∫_{-∞}^{∞} s(x + τ) s(x) dx = B_s(−τ).

Finally, an important property of the autocorrelation function is the following: for any value of the time shift τ, the modulus of the ACF does not exceed the signal energy:

|B_s(τ)| ≤ B_s(0) = E_s.

This fact follows directly from the Cauchy–Bunyakovsky (Cauchy–Schwarz) inequality (see Chapter 1):

|(s, s_τ)| ≤ ‖s‖ · ‖s_τ‖ = E_s.

So, the ACF is represented by a symmetrical curve with a central maximum, which is always positive. Moreover, depending on the type of signal, the autocorrelation function can have either a monotonically decreasing or oscillating character.

Example 3.3. Find the ACF of a rectangular video pulse.

Fig. 3.4, a shows a rectangular video pulse of amplitude U and duration τ_u. Also shown is its “copy”, shifted in the direction of delay by τ. The integral (3.15) is calculated in this case simply on the basis of a graphical construction. Indeed, the product s(t)·s(t − τ) is nonzero only within the time interval where the signals overlap. From Fig. 3.4 it is clear that this interval equals τ_u − |τ|, provided the shift does not exceed the pulse duration. Thus, for the signal under consideration

B_s(τ) = U²(τ_u − |τ|), |τ| ≤ τ_u.

The graph of such a function is the triangle shown in Fig. 3.4, b. The width of the base of the triangle is twice the duration of the pulse.

Fig. 3.4. Finding the ACF of a rectangular video pulse
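This triangular law is easy to check numerically. The sketch below approximates the integral (3.15) by a discrete sum; the amplitude, pulse duration and time step are purely illustrative values, not taken from the text:

```python
def acf(x, dt, lag):
    """Discrete approximation of B(tau) = integral of s(t)*s(t - tau) dt."""
    k = round(lag / dt)
    n = len(x)
    return sum(x[i] * x[i - k] for i in range(n) if 0 <= i - k < n) * dt

U, tau_u, dt = 2.0, 1e-3, 1e-6       # illustrative: 2 V, 1 ms pulse, 1 us step
pulse = [U] * round(tau_u / dt)      # rectangular video pulse

B0 = acf(pulse, dt, 0.0)             # expected: the energy U**2 * tau_u
Bh = acf(pulse, dt, tau_u / 2)       # expected: U**2 * (tau_u - tau_u/2)
print(B0, Bh)
```

At zero shift the sum reproduces the pulse energy; at half the duration it gives half of it, tracing the triangle of Fig. 3.4, b.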

Example 3.4. Find the ACF of a rectangular radio pulse.

We will consider a radio signal of the form

s(t) = U cos ω_0t, 0 ≤ t ≤ τ_u.

Knowing in advance that the ACF is even, we calculate the integral (3.15) setting 0 ≤ τ ≤ τ_u. In this case

B_s(τ) = U² ∫_τ^{τ_u} cos ω_0t · cos ω_0(t − τ) dt,

whence, neglecting a small term of order 1/ω_0 (assuming, as usual, that ω_0τ_u ≫ 1), we easily obtain

B_s(τ) ≈ (U²/2)(τ_u − |τ|) cos ω_0τ. (3.21)

Naturally, at τ = 0 the value of B_s(0) becomes equal to the energy of this pulse (see Example 1.9). Formula (3.21) describes the ACF of a rectangular radio pulse for all shifts lying within |τ| ≤ τ_u. If the absolute value of the shift exceeds the pulse duration, the autocorrelation function vanishes identically.

Example 3.5. Determine the ACF of a sequence of rectangular video pulses.

In radar, wide use is made of signals that are bursts of pulses of the same shape, following one another at equal time intervals. To detect such a burst, as well as to measure its parameters, for example its position in time, devices are built that implement hardware algorithms for computing the ACF.

Fig. 3.5. ACF of a burst of three identical video pulses: a - burst of pulses; b - ACF graph

Fig. 3.5, a shows a burst consisting of three identical rectangular video pulses. Its autocorrelation function, calculated by formula (3.15), is also shown there (Fig. 3.5, b).

It is clearly seen that the maximum of the ACF is reached at τ = 0. However, if the delay is a multiple of the burst period (τ = ±T, ±2T in our case), side lobes of the ACF are observed that are comparable in height with the main lobe. Therefore, one can speak of a certain imperfection of the correlation structure of this signal.
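The side lobes are easy to reproduce with a discrete sketch. Here the burst is three unit pulses of 10 samples each, spaced 50 samples apart (all values illustrative): at a lag of one period only two of the three pulses still overlap, so the side lobe is 2/3 of the main lobe.

```python
def acf_lag(x, k):
    """Discrete ACF: sum of x[i] * x[i - k] over the overlap region."""
    return sum(x[i] * x[i - k] for i in range(len(x)) if 0 <= i - k < len(x))

period = 50                                    # samples between pulse starts
burst = ([1] * 10 + [0] * (period - 10)) * 3   # three identical video pulses

main = acf_lag(burst, 0)                       # main lobe: all three pulses overlap
side = acf_lag(burst, period)                  # side lobe: only two pulses overlap
print(main, side)
```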

Autocorrelation function of an infinitely extended signal.

If it is necessary to consider periodic sequences of unlimited duration in time, then the approach to studying the correlation properties of signals must be somewhat modified.

We will assume that such a sequence is obtained from some time-localized, i.e. pulsed, signal when the duration of the latter tends to infinity. In order to avoid divergence of the resulting expressions, we define the new ACF as the average value of the scalar product of the signal and its copy:

B_s(τ) = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} s(t) s(t − τ) dt. (3.22)

With this approach, the autocorrelation function becomes equal to the average mutual power of the signal and its shifted copy.

For example, if we want to find the ACF of a cosine signal unlimited in time, we can use formula (3.21), obtained for a radio pulse of duration τ_u, and then pass to the limit τ_u → ∞, taking into account definition (3.22). As a result we get

B_s(τ) = (U²/2) cos ω_0τ.

This ACF is itself a periodic function; its value at τ = 0 is equal to U²/2.

Relationship between the energy spectrum of a signal and its autocorrelation function.

When studying the material in this chapter, the reader may think that the methods of correlation analysis act as some special techniques that have no connection with the principles of spectral decompositions. However, this is not true. It is easy to show that there is a close connection between the ACF and the energy spectrum of the signal.

Indeed, in accordance with formula (3.15), the ACF is the scalar product B_s(τ) = (s, s_τ), where the symbol s_τ denotes the copy of the signal s(t) shifted in time by τ.

Turning to the generalized Rayleigh formula (2.42), we can write the equality

B_s(τ) = (s, s_τ) = (1/2π) ∫_{-∞}^{∞} S(ω) S_τ*(ω) dω.

The spectral density of the time-shifted signal is S_τ(ω) = S(ω) e^{-jωτ}, so that S_τ*(ω) = S*(ω) e^{jωτ}.

Thus, we come to the result:

B_s(τ) = (1/2π) ∫_{-∞}^{∞} |S(ω)|² e^{jωτ} dω. (3.24)

The square of the modulus of the spectral density, as is known, represents the energy spectrum of the signal:

W_s(ω) = |S(ω)|². (3.25)

So, the energy spectrum and the autocorrelation function are related by the Fourier transform:

B_s(τ) = (1/2π) ∫_{-∞}^{∞} W_s(ω) e^{jωτ} dω.

It is clear that there is also an inverse relationship:

W_s(ω) = ∫_{-∞}^{∞} B_s(τ) e^{-jωτ} dτ. (3.26)

These results are fundamentally important for two reasons. Firstly, it turns out to be possible to evaluate the correlation properties of signals based on the distribution of their energy over the spectrum. The wider the frequency band of the signal, the narrower the main lobe of the autocorrelation function and the more perfect the signal in terms of the possibility of accurately measuring the moment of its beginning.

Secondly, formulas (3.24) and (3.26) indicate the way to experimentally determine the energy spectrum. It is often more convenient to first obtain the autocorrelation function, and then, using the Fourier transform, find the energy spectrum of the signal. This technique has become widespread when studying the properties of signals using high-speed computers in real time.

From the relation obtained it follows that the correlation interval τ_k turns out to be smaller, the higher the upper cutoff frequency of the signal spectrum.

Restrictions imposed on the form of the autocorrelation function of the signal.

The found connection between the autocorrelation function and the energy spectrum makes it possible to establish an interesting and, at first glance, non-obvious criterion for the existence of a signal with given correlation properties. The fact is that the energy spectrum of any signal, by definition, must be non-negative [see formula (3.25)]. This condition is not satisfied for every choice of the ACF. For example, suppose we take

B_s(τ) = B_0 for |τ| ≤ τ_0 and B_s(τ) = 0 for |τ| > τ_0,

and calculate the corresponding Fourier transform; then by (3.26)

W_s(ω) = 2B_0 sin(ωτ_0)/ω.

This alternating-sign function cannot represent the energy spectrum of any signal.

From a physical point of view, the correlation function characterizes the relationship or interdependence of two instantaneous values of one signal or of two different signals at the times t and t + τ. In the first case the correlation function is called the autocorrelation function, and in the second, the cross-correlation function. The correlation functions of deterministic processes depend only on τ.

If signals s_1(t) and s_2(t) are given, then the correlation functions are determined by the following expressions:

R_12(τ) = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} s_1(t) s_2(t + τ) dt — cross-correlation function; (2.66)

R_1(τ) = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} s_1(t) s_1(t + τ) dt — autocorrelation function. (2.67)

If s_1(t) and s_2(t) are two periodic signals with the same period T, then obviously their correlation function is also periodic with period T, and therefore it can be expanded in a Fourier series.

Indeed, if we expand the signal s_2(t + τ) in expression (2.66) into a Fourier series, we obtain

R_12(τ) = Σ_{n=-∞}^{∞} C_1n* C_2n e^{jnΩτ}, Ω = 2π/T, (2.68)

where C_1n and C_2n are the complex amplitudes of the n-th harmonics of the signals s_1(t) and s_2(t), respectively, and C_1n* is the coefficient complex conjugate to C_1n. The coefficients of the expansion of the cross-correlation function in a Fourier series can therefore be found as

R_n = C_1n* C_2n. (2.69)

The frequency expansion of the autocorrelation function is easily obtained from formulas (2.68) and (2.69) by putting s_2(t) = s_1(t). Then

R_1(τ) = Σ_{n=-∞}^{∞} |C_n|² e^{jnΩτ}. (2.70)

And since

|C_{-n}| = |C_n|, (2.71)

the autocorrelation function is even, and therefore

R_1(τ) = R_1(−τ). (2.72)

The parity of the autocorrelation function allows it to be expanded into a trigonometric Fourier series in cosines:

R_1(τ) = C_0² + 2 Σ_{n=1}^{∞} |C_n|² cos nΩτ.

In the special case τ = 0 we obtain:

R_1(0) = C_0² + 2 Σ_{n=1}^{∞} |C_n|².

Thus, the autocorrelation function at τ = 0 represents the total average power of the periodic signal, equal to the sum of the average powers of all its harmonics.
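This statement can be sketched numerically for a signal with a constant component and two harmonics (the amplitudes and phase below are arbitrary): the average power over a period must equal A_0² + A_1²/2 + A_2²/2.

```python
import math

A0, A1, A2, phi = 1.0, 2.0, 0.5, 0.7   # arbitrary constant term, amplitudes, phase
T, N = 1.0, 10000
dt = T / N
w = 2 * math.pi / T

def x(t):
    return A0 + A1 * math.cos(w * t) + A2 * math.cos(2 * w * t + phi)

P = sum(x(n * dt) ** 2 for n in range(N)) * dt / T   # average power over one period
P_harm = A0**2 + A1**2 / 2 + A2**2 / 2               # sum of the harmonics' powers
print(P, P_harm)
```

The phase φ of the second harmonic does not appear in the result, in line with the fact that the ACF at τ = 0 depends only on the amplitude spectrum.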

Frequency representation of pulse signals

In the previous discussion it was assumed that the signals are continuous, but in automatic information processing pulsed signals are often used, as well as conversion of continuous signals into pulsed ones. This requires consideration of the frequency representation of pulse signals.

Let's consider the model of converting a continuous signal into a pulsed form, presented in Fig. 2.6a.



Let a continuous signal x(t) arrive at the input of the pulse modulator (Fig. 2.6b). The pulse modulator generates a sequence of unit pulses (Fig. 2.6c) with period T and pulse duration τ_p, where τ_p ≪ T. The mathematical model of such a sequence of pulses can be described by the function:

f(t) = 1 for kT ≤ t ≤ kT + τ_p; f(t) = 0 otherwise, (2.74)

where k = 0, ±1, ±2, … is the number of the pulse in the sequence.

The output signal of the pulse modulator (Fig. 2.6d) can then be represented as:

x_p(t) = x(t) · f(t).

In practice it is desirable to have a frequency representation of the pulse train. To this end the function f(t), being periodic, can be represented by a Fourier series:

f(t) = Σ_{n=-∞}^{∞} C_n e^{jnω_1t}, (2.75)

where

C_n = (1/T) ∫_0^T f(t) e^{-jnω_1t} dt (2.76)

are the spectral coefficients of the expansion into the Fourier series, ω_1 = 2π/T is the pulse repetition frequency, and n is the harmonic number.

Substituting relation (2.74) into expression (2.76), we find:

C_n = (1/T) ∫_0^{τ_p} e^{-jnω_1t} dt = (1 − e^{-jnω_1τ_p}) / (jnω_1T).

Substituting (2.76) into (2.75) and passing to the real (trigonometric) form of the series, we get

f(t) = τ_p/T + Σ_{n=1}^{∞} (1/(nπ)) [sin nω_1τ_p · cos nω_1t + (1 − cos nω_1τ_p) · sin nω_1t]. (2.78)

Transforming the sum in brackets, we obtain

f(t) = τ_p/T + Σ_{n=1}^{∞} (2/(nπ)) sin(nω_1τ_p/2) cos(nω_1t − φ_n). (2.79)

Here we have introduced the designation for the phase of the n-th harmonic

φ_n = nω_1τ_p/2. (2.81)

Thus, a sequence of unit pulses contains, along with a constant component, an infinite number of harmonics with decreasing amplitudes. The amplitude of the n-th harmonic is determined from the expression:

A_n = (2/(nπ)) |sin(nω_1τ_p/2)| = (2τ_p/T) |sin(nπτ_p/T) / (nπτ_p/T)|.
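A quick numerical cross-check of the sinc-envelope amplitude law (2τ_p/T)·|sin(nπτ_p/T)/(nπτ_p/T)| is sketched below; the period and pulse duration are illustrative values, and the Fourier coefficients are computed by direct integration over one period:

```python
import math

T, tau_p = 1.0, 0.2                  # illustrative period and pulse duration
N = 20000
dt = T / N
w1 = 2 * math.pi / T                 # pulse repetition frequency
f = [1.0 if n * dt < tau_p else 0.0 for n in range(N)]   # one period of the train

def A_num(n):                        # harmonic amplitude by direct integration
    a = sum(f[k] * math.cos(n * w1 * k * dt) for k in range(N)) * 2 * dt / T
    b = sum(f[k] * math.sin(n * w1 * k * dt) for k in range(N)) * 2 * dt / T
    return math.hypot(a, b)

def A_theory(n):                     # (2*tau_p/T) * |sin(n*pi*tau_p/T) / (n*pi*tau_p/T)|
    u = n * math.pi * tau_p / T
    return 2 * tau_p / T * abs(math.sin(u) / u)

for n in (1, 2, 3, 7):
    assert abs(A_num(n) - A_theory(n)) < 1e-3
print("harmonic amplitudes follow the sinc envelope")
```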

Digital signal processing involves time sampling (discretization), that is, the conversion of a continuous signal into a sequence of short pulses. As shown above, any pulse sequence has a rather complex spectrum, so the natural question arises of how the time-sampling process affects the frequency spectrum of the original continuous signal.

To investigate this question, consider the mathematical model of the time-sampling process shown in Fig. 2.7a.

The pulse modulator (PM) is represented as a modulator with a carrier in the form of an ideal sequence of very short pulses (a sequence of δ-functions), whose repetition period is equal to T (Fig. 2.7b).

A continuous signal is received at the input of the pulse modulator (Fig. 2.7c), and a pulse signal is generated at the output (Fig. 2.7d).


The model of the ideal sequence of δ-functions can then be described by the following expression:

f_δ(t) = Σ_{k=-∞}^{∞} δ(t − kT).

3 Correlation analysis of signals

The point of spectral analysis of signals is to study how a signal can be represented as a sum (or integral) of simple harmonic oscillations, and how the shape of the signal determines the structure of the distribution over frequency of the amplitudes and phases of these oscillations. In contrast, the task of correlation analysis of signals is to determine the degree of similarity and difference between signals or between time-shifted copies of the same signal. The introduction of a measure opens the way to quantitative measurements of the degree of similarity of signals. It will be shown that there is a certain relationship between the spectral and correlation characteristics of signals.

3.1 Autocorrelation function (ACF)

The autocorrelation function of a signal with finite energy is the value of the integral of the product of two copies of this signal, shifted relative to each other by a time τ, considered as a function of this time shift τ:

B_x(τ) = ∫_{-∞}^{∞} x(t) x(t − τ) dt.

If the signal is defined on a finite time interval, then its ACF is found as:

B_x(τ) = ∫_{T_ov} x(t) x(t − τ) dt,

where T_ov is the overlap interval of the shifted copies of the signal.

It is considered that the greater the value of the autocorrelation function at a given τ, the more similar to each other are the two copies of the signal shifted by the time τ. The correlation function is therefore a measure of similarity for shifted copies of the signal.

The similarity measure introduced in this way for signals that have the form of random oscillations around a zero value has the following characteristic properties.

If shifted copies of the signal oscillate approximately in step with each other, this is a sign of their similarity, and the ACF takes large positive values (large positive correlation). If the copies oscillate almost in antiphase, the ACF takes large negative values (anti-similarity of the signal copies, large negative correlation).

The maximum of the ACF is reached when the copies coincide, that is, in the absence of a shift. Zero ACF values occur at shifts for which neither similarity nor anti-similarity of the signal copies is noticeable (zero correlation, no correlation).

Figure 3.1 shows a fragment of a realization of a certain signal on the time interval from 0 to 1 s. The signal oscillates randomly around zero. Since the interval of existence of the signal is finite, its energy is also finite, and its ACF can be calculated by the finite-interval formula given above.

The autocorrelation function of the signal, calculated in MathCad in accordance with this equation, is presented in Fig. 3.2. The correlation function shows not only that the signal is similar to itself (shift τ = 0), but also that copies of the signal shifted relative to each other by approximately 0.063 s (lateral maximum of the autocorrelation function) also have some similarity. In contrast, copies of the signal shifted by 0.032 s should be anti-similar to each other, that is, in some sense opposite to each other.

Figure 3.3 shows pairs of these two copies. From the figure one can see what is meant by similarity and anti-similarity of signal copies.

The correlation function has the following properties:

1. At τ = 0 the autocorrelation function takes its largest value, equal to the signal energy:

B_x(0) = ∫_{-∞}^{∞} x²(t) dt = E_x.

2. The autocorrelation function is an even function of the time shift: B_x(τ) = B_x(−τ).

3. As |τ| increases, the autocorrelation function decreases to zero.

4. If the signal does not contain discontinuities of the δ-function type, then B_x(τ) is a continuous function.



5. If the signal is an electrical voltage, then the correlation function has the dimension V²·s.
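Properties 1 and 2 can be sketched with a discrete ACF; the sample values below are arbitrary and the time step is taken as unity:

```python
def B(x, k):                         # discrete ACF at integer lag k (unit time step)
    return sum(x[i] * x[i - k] for i in range(len(x)) if 0 <= i - k < len(x))

x = [0.3, -1.2, 2.0, 0.7, -0.5, 1.1]      # arbitrary samples
E = sum(v * v for v in x)                 # signal "energy"

assert B(x, 0) == E                                   # property 1: B(0) = E
assert all(B(x, k) == B(x, -k) for k in range(6))     # property 2: evenness
assert all(abs(B(x, k)) <= E for k in range(-6, 7))   # |B(tau)| never exceeds E
print("ACF properties verified")
```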

For periodic signals, the same integral in the definition of the autocorrelation function is additionally divided by the signal repetition period:

B_x(τ) = (1/T) ∫_0^T x(t) x(t − τ) dt.

The correlation function introduced in this way has the following properties:

The value of the correlation function at zero is equal to the signal power,

The dimension of the correlation function is equal to the square of the dimension of the signal, for example V².

For example, let us calculate the correlation function of a harmonic oscillation x(t) = A cos(ωt + φ):

B_x(τ) = (1/T) ∫_0^T A cos(ωt + φ) · A cos(ω(t − τ) + φ) dt.

Using a series of trigonometric transformations, we finally obtain:

B_x(τ) = (A²/2) cos ωτ.

Thus, the autocorrelation function of a harmonic oscillation is a cosine oscillation with the same period of variation as the signal itself. With shifts that are multiples of the oscillation period, the harmonic passes into itself, and the ACF takes its largest values, equal to half the square of the amplitude. Time shifts that are multiples of half the oscillation period are equivalent to a phase shift by an angle π; in this case the sign of the oscillation reverses, and the ACF takes its minimum value, negative and equal to half the square of the amplitude. Shifts that are multiples of a quarter of the period transform, for example, a sine oscillation into a cosine one and vice versa; in this case the ACF vanishes. Such signals, which are in quadrature with each other, turn out from the point of view of the autocorrelation function to be completely dissimilar.
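A minimal numerical sketch, with arbitrary amplitude and initial phase, confirms that the period-averaged ACF of a harmonic equals (A²/2)cos ωτ and does not depend on the phase:

```python
import math

A, T, N = 3.0, 1.0, 10000            # arbitrary amplitude; one period, N samples
w = 2 * math.pi / T
dt = T / N

def B(tau, phi=0.4):                 # period-averaged ACF of A*cos(w*t + phi)
    return sum(A * math.cos(w * n * dt + phi)
               * A * math.cos(w * (n * dt - tau) + phi)
               for n in range(N)) * dt / T

for tau in (0.0, 0.1, 0.25, 0.5):
    assert abs(B(tau) - A**2 / 2 * math.cos(w * tau)) < 1e-6
assert abs(B(0.1, phi=1.9) - B(0.1, phi=0.4)) < 1e-6   # initial phase drops out
print("B(tau) = (A^2/2) * cos(w*tau)")
```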

It is important that the expression for the correlation function of the signal does not contain its initial phase. The phase information is lost. This means that the signal itself cannot be reconstructed from its correlation function: the mapping of a signal to its correlation function, in contrast to the mapping of a signal to its spectrum, is not one-to-one.

If by the mechanism of signal generation we mean a certain demiurge who creates a signal according to a correlation function he has chosen, then he could create a whole set of signals (an ensemble of signals) that actually have the same correlation function but differ from each other in their phase relationships. The phase differences between such signals could then be interpreted as:

the act of the signal manifesting its own free will, independent of the will of its creator (the emergence of individual realizations of some random process),

the result of external interference with the signal (the introduction into the signal of measurement information obtained in measurements of some physical quantity).

The situation is similar for any periodic signal. If a periodic signal with fundamental period T has an amplitude spectrum A_n and a phase spectrum φ_n, then the correlation function of the signal takes the following form:

B_x(τ) = A_0² + (1/2) Σ_{n=1}^{∞} A_n² cos nΩτ, Ω = 2π/T.

Already in these examples there is some connection between the correlation function and the spectral properties of the signal. These relationships will be discussed in more detail later.

3.2 Cross-correlation function (CCF).

In contrast to the autocorrelation function, the cross-correlation function determines the degree of similarity of copies of two different signals x(t) and y(t), shifted by a time τ relative to each other:

B_xy(τ) = ∫_{-∞}^{∞} x(t) y(t − τ) dt.

The cross-correlation function has the following properties:

1. At τ = 0 the cross-correlation function takes a value equal to the mutual energy of the signals, that is, the energy of their interaction:

B_xy(0) = ∫_{-∞}^{∞} x(t) y(t) dt = E_xy.

2. For any τ the following relation holds:

|B_xy(τ)| ≤ √(E_x E_y),

where E_x and E_y are the energies of the signals.

3. Changing the sign of the time shift is equivalent to interchanging the signals:

B_xy(−τ) = B_yx(τ).

4. As |τ| increases, the cross-correlation function decreases to zero, although not necessarily monotonically.

5. The value of the cross-correlation function at τ = 0 is in no way distinguished among its other values.
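Properties 2 and 3 can be checked on a discrete sketch (the sample values below are arbitrary, and the shift convention is B_xy(k) = Σ x[i]·y[i − k]):

```python
def ccf(x, y, k):                    # discrete B_xy(k): sum of x[i] * y[i - k]
    return sum(x[i] * y[i - k] for i in range(len(x)) if 0 <= i - k < len(y))

x = [1.0, -0.5, 2.0, 0.0, 1.5]       # arbitrary samples
y = [0.5, 1.0, -1.0, 2.0, 0.5]
Ex = sum(v * v for v in x)           # signal energies
Ey = sum(v * v for v in y)

for k in range(-4, 5):
    assert ccf(x, y, -k) == ccf(y, x, k)            # property 3
    assert ccf(x, y, k) ** 2 <= Ex * Ey + 1e-12     # property 2
print("CCF properties verified")
```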

For periodic signals, the concept of a cross-correlation function is, as a rule, not used at all.

Devices for measuring the values ​​of autocorrelation and cross-correlation functions are called correlometers or correlators. Correlometers are used, for example, to solve the following information and measurement tasks:

Statistical analysis of electroencephalograms and other results of registration of biopotentials,

Determination of the spatial coordinates of the signal source by the magnitude of the time shift at which the maximum CCF is achieved,

Isolation of a weak signal against a background of strong, statistically unrelated interference,

Detection and localization of information leakage channels by determining the correlation between radio signals indoors and outdoors,

Automated near-field detection, recognition and search for operating radio-emitting listening devices, including mobile phones, used as listening devices,

Localization of leaks in pipelines by determining the CCF of two acoustic noise signals caused by the leak at two measurement points where sensors are mounted on the pipe.
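The delay-measurement applications above all rest on one operation: locating the maximum of the CCF. The sketch below runs under assumed conditions (a synthetic broadband source, a hypothetical 37-sample delay between two sensors, weak unrelated noise); the argmax of the CCF recovers the shift:

```python
import random

random.seed(1)
N, true_delay = 400, 37              # hypothetical 37-sample propagation delay
s = [random.gauss(0, 1) for _ in range(N)]            # broadband source noise
x = s[:]                                              # sensor 1
y = [0.0] * true_delay + s[:N - true_delay]           # sensor 2: delayed copy
y = [v + 0.2 * random.gauss(0, 1) for v in y]         # weak unrelated noise

def ccf(a, b, k):
    return sum(a[i] * b[i - k] for i in range(len(a)) if 0 <= i - k < len(b))

est = max(range(-100, 101), key=lambda k: ccf(y, x, k))
print("delay at the CCF maximum:", est)
```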

3.3 Relationships between correlation and spectral functions.

Both the correlation and the spectral functions describe the internal structure of signals. Therefore, we can expect that there is some interdependence between these two ways of describing signals. You have already seen such a connection in the example of periodic signals.

The cross-correlation function, like any other function of time, can be subjected to the Fourier transform:

F{B_xy(τ)} = ∫_{-∞}^{∞} [∫_{-∞}^{∞} x(t) y(t − τ) dt] e^{-jωτ} dτ.

Let us change the order of integration:

F{B_xy(τ)} = ∫_{-∞}^{∞} x(t) [∫_{-∞}^{∞} y(t − τ) e^{-jωτ} dτ] dt.

The expression in square brackets could be taken for the Fourier transform of the signal y(t), but after the change of variable u = t − τ it becomes

∫_{-∞}^{∞} y(u) e^{-jω(t-u)} du = e^{-jωt} Y*(ω),

that is, the inner integral gives an expression that is the complex conjugate of the spectral function (for a real signal y(t)), multiplied by e^{-jωt}.

The factor Y*(ω) does not depend on time, so it can be taken outside the sign of the outer integral. The outer integral, together with the factor e^{-jωt}, then simply gives the definition of the spectral function of the signal x(t). Finally we have:

F{B_xy(τ)} = X(ω) Y*(ω).

This means that the Fourier transform of the cross-correlation function of two signals is equal to the product of their spectral functions, one of which is subjected to complex conjugation. This product is called the mutual spectrum of the signals:

W_xy(ω) = X(ω) Y*(ω).

An important conclusion follows from this expression: if the spectra of the signals x(t) and y(t) do not overlap, that is, they occupy different frequency ranges, then such signals are uncorrelated and independent of each other.
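This conclusion can be sketched with two harmonics at different frequencies (3/T and 7/T, chosen arbitrarily): their circular CCF over the common period vanishes at every shift.

```python
import math

T, N = 1.0, 8192
dt = T / N
w = 2 * math.pi / T
x = [math.cos(3 * w * n * dt) for n in range(N)]   # energy at 3/T only
y = [math.sin(7 * w * n * dt) for n in range(N)]   # energy at 7/T only

def ccf_circ(k):                     # circular CCF over the common period
    return sum(x[i] * y[(i - k) % N] for i in range(N)) * dt

for k in (0, 100, 1000):
    assert abs(ccf_circ(k)) < 1e-9   # no spectral overlap -> no correlation
print("disjoint spectra give zero cross-correlation at every shift")
```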

If we put x(t) = y(t) in the formulas given, we obtain an expression for the Fourier transform of the autocorrelation function:

F{B_x(τ)} = X(ω) X*(ω) = |X(ω)|².

This means that the autocorrelation function of a signal and the squared modulus of its spectral function are related to each other by the Fourier transform.

The function W_x(ω) = |X(ω)|² is called the energy spectrum of the signal. The energy spectrum shows how the total energy of the signal is distributed over the frequencies of its individual harmonic components.

3.4 Energy characteristics of signals in the frequency domain

The cross-correlation function of two signals is related by the Fourier transform to their mutual spectrum, so it can be expressed as the inverse Fourier transform of the cross spectrum:

B_xy(τ) = (1/2π) ∫_{-∞}^{∞} W_xy(ω) e^{jωτ} dω = (1/2π) ∫_{-∞}^{∞} X(ω) Y*(ω) e^{jωτ} dω.

Now let us substitute the value of the time shift τ = 0 into this chain of equalities. As a result we obtain a relation that expresses the generalized Rayleigh equality:

∫_{-∞}^{∞} x(t) y(t) dt = (1/2π) ∫_{-∞}^{∞} X(ω) Y*(ω) dω,

that is, the integral of the product of two signals is equal (up to the factor 1/2π) to the integral of the product of the spectra of these signals, one of which is subjected to the operation of complex conjugation.

Setting y(t) = x(t), we obtain

E_x = ∫_{-∞}^{∞} x²(t) dt = (1/2π) ∫_{-∞}^{∞} |X(ω)|² dω.

This relation is called Parseval's equality.
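The discrete counterpart of Parseval's equality, Σ x²[n] = (1/N) Σ |X[m]|², can be verified with a direct DFT; the sequence below is arbitrary:

```python
import cmath

x = [0.5, -1.0, 2.0, 0.25, -0.75, 1.5, 0.0, -2.0]   # arbitrary finite sequence
N = len(x)

X = [sum(x[n] * cmath.exp(-2j * cmath.pi * m * n / N) for n in range(N))
     for m in range(N)]

E_time = sum(v * v for v in x)               # energy in the time domain
E_freq = sum(abs(Xm) ** 2 for Xm in X) / N   # (1/N) * sum |X[m]|^2
print(E_time, E_freq)
```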

Periodic signals have infinite energy but finite power. When considering them, we have already encountered the possibility of calculating the power of a periodic signal as the sum of the squares of the moduli of the coefficients of its complex spectrum:

P = (1/T) ∫_0^T x²(t) dt = Σ_{n=-∞}^{∞} |C_n|².

This relation is completely analogous to Parseval’s equality.
