• Probabilistic characteristics of random signals. Characteristics of stationary random signals

    29.06.2020

    Since all information signals and noise are random and can be predicted only with a certain degree of probability, probability theory is used to describe such signals. In this case, statistical characteristics are used, which are obtained by conducting numerous experiments under the same conditions.

    All random phenomena studied by probability theory can be divided into three groups:
    — random events;
    — random variables;
    — random processes.

    A random event is any fact that may or may not occur as the result of an experiment.
    Examples of random events are the appearance of interference at the receiver input or the reception of a message with an error.
    Random events are denoted by the Latin letters A, B, C.

    The numerical characteristics of a random event are:
    1. Frequency of occurrence of a random event:

    ν(A) = m/N, (40)

    where m is the number of experiments in which this event occurred;
    N is the total number of experiments performed.

    As follows from expression (40), the frequency of occurrence of a random event cannot exceed 1, since the number of experiments in which this event occurred cannot exceed the total number of experiments performed.
    2. Probability of occurrence of a random event:

    P(A) = lim m/N as N → ∞. (41)

    That is, the probability of occurrence of a random event is the limit of its frequency of occurrence as the number of experiments increases without bound. The probability of an event cannot exceed 1. A random event whose probability equals one is certain, i.e. it will definitely happen; this is why events that have already occurred have such a probability.
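    For example, the frequency of "heads" in a long series of fair coin tosses approaches the probability 0.5. A minimal MATLAB sketch of this convergence (a hypothetical illustration, not part of the original experiments):

    N = 100000;                   % total number of experiments
    outcomes = rand(1, N) < 0.5;  % event A ("heads") occurs with P(A) = 0.5
    m = cumsum(outcomes);         % number of occurrences of A after each experiment
    nu = m ./ (1:N);              % frequency of occurrence, expression (40)
    plot(1:N, nu), grid on
    xlabel('N'), ylabel('m/N')
    title('Frequency m/N approaching P(A) = 0.5')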
    A random variable is a quantity that changes randomly from experiment to experiment.
    Examples of random variables are the amplitude of the interference at the receiver input or the number of errors in a received message. Random variables are denoted by the Latin letters X, Y, Z, and their values by x, y, z.
    Random variables can be discrete or continuous.
    A discrete random variable can take only a finite set of values (for example, the number of pieces of equipment or the number of telegrams, since these can take only the integer values 1, 2, 3, ...).
    A continuous random variable can take any value from a certain range (for example, the amplitude of the interference at the receiver input, like an analog information signal, can take any value within some range).

    Numerical, statistical characteristics describing random variables are:
    1. Probability distribution function.

    F(x) = P(X ≤ x) (42)

    This function shows the probability that the random variable X will not exceed a specifically selected value x. If the random variable X is discrete, then F(x) is a discrete (step) function; if X is a continuous variable, then F(x) is a continuous function.
    2. Probability density function.

    p(x) = dF(x)/dx (43)

    This characteristic shows the probability that the value of the random variable falls into a small interval dx in the vicinity of the point x′, i.e., into the shaded area in the figure.

    3. Expected value (mathematical expectation).

    M[X] = Σ xi·P(xi), summed over i = 1, …, n, (44)

    where xi are the values of the random variable;
    P(xi) is the probability of occurrence of these values;
    n is the number of possible values of the random variable.

    M[X] = ∫ x·p(x) dx, (45)

    where p(x) is the probability density of a continuous random variable and the integral is taken over all its possible values.

    In its meaning, the mathematical expectation is the average value of a random variable, i.e. the value around which the results of individual experiments are grouped (for symmetric distributions it also coincides with the most probable value). Expression (44) is applied if the random variable is discrete, and expression (45) if it is continuous. The generally accepted notation for the mathematical expectation is M[X], with the random variable indicated in square brackets, but the notations mx or m are also used.

    4. Dispersion.

    D[X] = Σ (xi − M[X])²·P(xi), (46)

    D[X] = ∫ (x − M[X])²·p(x) dx. (47)

    Dispersion quantitatively characterizes the degree of scattering of the results of individual experiments relative to the average value. The notation D[X] for the variance of a random variable is generally accepted, but the notation σx² can also be used. Expression (46) is used to calculate the variance of a discrete random variable, and (47) that of a continuous one. Taking the square root of the variance gives a quantity called the standard deviation (σx).
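    For example, for a fair six-sided die expressions (44) and (46) give M[X] = 3.5 and D[X] ≈ 2.92. A short MATLAB check (a hypothetical example):

    xi = 1:6;                       % values of the discrete random variable
    Pxi = ones(1, 6)/6;             % equal probabilities of the faces
    M = sum(xi .* Pxi)              % expected value, expression (44)
    D = sum((xi - M).^2 .* Pxi)     % dispersion, expression (46)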

    All characteristics of a random variable can be shown using Figure 22.

    Figure 22 - Characteristics of a random variable

    A random process is a function of time t whose value at any fixed moment of time is a random variable. For example, Figure 23 shows a diagram of some random process observed as a result of three experiments. If we determine the values of the functions at a fixed time t1, the resulting values turn out to be random variables.

    Figure 23 - Ensemble of implementations of a random process

    Thus, the observation of any random variable X in time is a random process X(t). For example, information signals (telephone, telegraph, data transmission, television) and noise (narrowband and broadband) are considered as random processes.
    A single observation of a random process is called a realization xk(t). The set of all possible realizations of one random process is called an ensemble of realizations. For example, Figure 23 shows an ensemble of realizations of a random process consisting of three realizations.

    To characterize random processes, the same characteristics are used as for random variables: the probability distribution function, the probability density function, the mathematical expectation and the dispersion. These characteristics are calculated in the same way as for random variables. Random processes are of various types; however, in telecommunications most random signals and noise are stationary ergodic random processes.

    A stationary process is a random process whose characteristics F(x), p(x), M[X] and D[X] do not depend on time.
    An ergodic process is one in which time averaging over one of the realizations leads to the same results as statistical averaging over all realizations. Physically, this means that all realizations of an ergodic process are similar to each other, so the characteristics of such a process can be measured and calculated using any one of its realizations.
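    A minimal MATLAB sketch of this property (a hypothetical illustration, assuming a zero-mean Gaussian white-noise process, which is stationary and ergodic):

    K = 200; N = 1000;
    X = randn(K, N);               % ensemble of K realizations, N samples each
    timeAvg = mean(X(1, :))        % time averaging over a single realization
    ensembleAvg = mean(X(:, 500))  % statistical averaging over the ensemble at a fixed time
    % for an ergodic process both estimates are close to M[X] = 0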
    In addition to the four characteristics given above, random processes are also described by the correlation function and power spectral density.

    The correlation function characterizes the degree of relationship between the values of a random process at different moments of time t and t + τ, where τ is the time shift:

    K(τ) = (1/tн) ∫ xk(t)·xk(t + τ) dt,

    where tн is the observation time of the realization xk(t) and the integral is taken over the observation interval.

    Power spectral density shows the distribution of the power of a random process over frequency:

    G(f) = ΔP/Δf,

    where ΔP is the power of the random process within the frequency band Δf.

    Thus, the observation of a random phenomenon in time is a random process, its occurrence is a random event, and its magnitude is a random variable.

    For example, observing a telegraph signal at the output of a communication line for some time is a random process, the appearance of its discrete element “1” or “0” at reception is a random event, and the amplitude of this element is a random variable.

    MINISTRY OF EDUCATION AND SCIENCE OF THE RF

    NOVOSIBIRSK STATE TECHNICAL
    UNIVERSITY

    FACULTY OF AUTOMATION AND COMPUTER ENGINEERING

    Department of Data Collection and Processing Systems

    LABORATORY WORK No. 12

    RANDOM SIGNALS AND THEIR CHARACTERISTICS

    Group: AT-73 Teacher: Assoc. Shchetinin Yu.I.

    Student: Vitenkova S.E.

    Novosibirsk

    Goal of the work: studying the basic characteristics of stationary random signals (average value, autocorrelation function, power spectral density) and acquiring practical skills in their calculation and analysis in the Matlab environment.

    1. Generating 500 samples of a random signal X with zero mathematical expectation and unit variance and calculating estimates of the mean and variance of X.

    Let us use the following script file to generate 500 samples of a random signal X with zero mathematical expectation and unit variance and to plot X.
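    A minimal sketch of such a script, assuming the standard normal generator randn:

    N = 500;
    X = randn(1, N);   % 500 samples with zero mean and unit variance
    plot(X), grid on
    xlabel('n'), ylabel('X(n)')
    title('Random signal X')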

    The resulting graph is shown in Fig. 1.

    Fig. 1. Graph of the random signal X.

    Random processes can be characterized by mathematical expectation and dispersion. The mathematical expectation is the average value of a random variable, and the dispersion characterizes the scattering of the signal relative to its average value.

    These characteristics can be approximately determined from N known signal samples using expressions (1) and (2).

    mX = (1/N)·Σ X(i), (1)

    DX = (1/(N − 1))·Σ (X(i) − mX)², (2)

    where the sums are taken over i = 1, …, N.

    Let us use the custom functions dispersiya() and ozhidanie() to determine estimates of the mathematical expectation and dispersion from expressions (1) and (2).

    function D = dispersiya(y)

    % estimate of the variance, expression (2)

    m = ozhidanie(y);

    D = sum((y - m).^2)/(length(y)-1);

    function m = ozhidanie(y)

    % estimate of the mathematical expectation, expression (1)

    m = sum(y)/length(y);
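    The functions can be applied to the signal X from item 1 as follows (a usage sketch):

    m = ozhidanie(X)    % estimate of the mathematical expectation, expression (1)
    D = dispersiya(X)   % estimate of the variance, expression (2)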

    We obtain the following estimate values:

    During generation, zero mathematical expectation and unit variance were specified. We see that the obtained estimates are close to the specified values. The reason for their incomplete agreement is that a finite sample of N samples is used; the estimates converge to the true values as N → ∞.

    2. Plotting the probability density graph and the histogram of the signal X.

    Using the following script file, we construct the probability density graph of a normal random variable (using expression (3)) and the histogram of the signal X using the function hist().

    f(x) = (1/√(2πD))·exp(−(x − m)²/(2D)), (3)

    x = -4:0.01:4;                               % grid of values
    f = (exp(-(x-m).^2/(2*D)))/(sqrt(2*pi*D));   % normal PDF (3); m, D are the estimates from item 1
    subplot(2,1,1), plot(x, f);
    title("Probability density graph");
    set(gca,"FontName", "Times New Roman","FontSize", 10);
    subplot(2,1,2), hist(X);
    title("Histogram of random signal X");

    The resulting graphs are presented in Fig. 2.

    Fig. 2. Probability density graph and histogram.

    We see that the histogram of the random signal X is similar in shape to the probability density graph. They do not coincide completely because the histogram is built from a finite sample of N samples; the histogram converges to the probability density graph as N → ∞.

    3. Determining the ACF of the system output signal analytically and using the function conv().

    One of the characteristics of a random signal is its autocorrelation function (ACF), which is determined by expression (4):

    Kx(m) = M[x(n)·x(n + m)]. (4)

    The ACF determines the degree of dependence between signal samples separated from each other by an interval m.

    White noise is a random process whose ACF is equal to zero for any m ≠ 0, i.e. values separated by an interval m do not depend on each other. The ACF of white noise is therefore determined by expression (5):

    Kx(m) = σx²·δ(m), (5)

    where δ(m) is the unit pulse.

    The relationship between the ACFs of the discrete output and input signals of a system is determined by the expression

    Kyy(m) = h(m) * h(−m) * Kxx(m), (6)

    where h(m) is the impulse response of the system and * denotes convolution.

    Using expression (6), we determine the ACF of the output signal of the system with the given difference equation when white noise is applied to the input of the system.

    Let us determine the impulse response of the given system by applying a unit delta pulse to its input.

    Fig. 3. Graphs of the impulse response and the white-noise ACF.

    With unit variance, the ACF of white noise equals the unit pulse δ(m). Convolution of any signal with a unit pulse gives the original signal, which means Kyy(m) = h(m) * h(−m).

    Using the geometric meaning of the convolution operation, we find Kyy(m).

    Fig. 4. Graph of the ACF of the system output signal when white noise is applied to the input.

    We see that, in comparison with the ACF of the input signal, nonzero components have appeared in the output ACF at m ≠ 0, i.e. the output signal is a correlated process, in contrast to the input white noise.

    Let us determine the ACF of the system output signal when the random signal X defined in item 1 is applied to the input.

    An estimate of the ACF of the signal X can be determined by the expression

    Kxx(m) = (1/N)·Σ x(n)·x(n + m). (7)

    The ACF estimate determined by expression (7) can be calculated using the Matlab function xcorr(). Using this function, we find an estimate of the ACF of the signal X and plot this estimate.

    [Kxx, lags] = xcorr(X, "biased");   % biased ACF estimate, expression (7)

    stem(lags, Kxx);

    set(gca,"FontName", "Times New Roman Cyr", "FontSize", 10)

    title("ACF estimate of signal X");

    Fig. 5. Graph of the ACF estimate of the random signal X.

    We see that the ACF estimate of the signal X is close to the ACF of white noise (Fig. 3), which means that the relationship between different values of the signal X is small. The presence of nonzero components at m ≠ 0 is explained by the finiteness of the sample.

    Using the Matlab function conv(), let us determine the ACF of the output signal using expression (6).

    h1 = ;                 % impulse response h(m) (values omitted in the source)

    h2 = ;                 % time-reversed impulse response h(-m)

    c = conv(h1,h2);       % h(m) * h(-m)

    Kyy = conv(c, Kxx);    % ACF of the output signal, expression (6)

    stem(-(N+3):(N+3), Kyy)


    Fig. 6. ACF of the output signal when the signal X is applied to the input.

    In the enlarged fragment of Fig. 6, one can see that the ACF values of the output signal for the input signal X are close to the ACF values of the output signal when white noise is applied to the input (Fig. 4).

    Using the following sequence of commands, we plot the ACF graphs of the input and output signals in order to compare them.

    subplot(2,1,1), stem(lags, Kxx);

    set(gca,"FontName", "Times New Roman Cyr", "FontSize", 10)

    title("ACF estimate of signal X");

    subplot(2,1,2), stem(-(N+3):(N+3), Kyy)

    set(gca,"FontName", "Times New Roman Cyr", "FontSize", 10)

    title("ACF of output signal");

    Fig. 7. ACF graphs of the filter input and output signals.

    In Fig. 7 we see that the output signal is more correlated than the input one: its ACF contains a larger number of nonzero components, i.e. there is a dependence between the values of the output signal.

    4. Plotting scatter plots of the system output signal Y.

    The information transmitted over a communication channel or extracted as a result of a measurement is contained in the signal.

    Before receiving a message (before the experiment), the signal should be considered as a random process, i.e. a set (ensemble) of time functions obeying some statistical regularity common to them. One of these functions, which becomes fully known after the message is received, is called a realization of the random process. This realization is no longer random, but a deterministic function of time.

    An important, though not exhaustive, characteristic of a random process is its one-dimensional probability distribution law.

    Fig. 4.1 shows a set of functions forming a random process. The values that the individual functions can take at a moment of time t form a set of random variables.

    Rice. 4.1. A set of functions that form a random process

    The probability that a value falls within a given interval (x1, x2) (Fig. 4.1) is determined by the expression

    P(x1 < X ≤ x2) = ∫ p(x) dx, with integration from x1 to x2.

    The function p(x) represents the differential distribution law of the random variable and is called the one-dimensional probability density; F(x) is called the integral probability (distribution function).

    The function p(x) makes sense for random variables of continuous type, which can take any value within a certain interval. Whatever the nature of the function p(x), the equality

    ∫ p(x) dx = 1 (4.2)

    must be satisfied, where the integration limits are the boundaries of the possible values of x.

    If X is a random variable of discrete type that can take any of a finite number of discrete values xi, then (4.2) should be replaced by the sum

    Σ Pi = 1,

    where Pi is the probability corresponding to the value xi.

    Setting the one-dimensional probability density allows one to perform statistical averaging both of the quantity x itself and of any function of it. Statistical averaging means averaging over the set (over the ensemble) in some "cross-section" of the process, i.e. at a fixed moment of time.

    For practical applications, the following parameters of the random process are of greatest importance:

    the expected value

    m = M[X] = ∫ x·p(x) dx;

    the dispersion

    D = M[(X − m)²] = ∫ (x − m)²·p(x) dx;

    the standard deviation

    σ = √D.

    The one-dimensional probability density is not sufficient for a complete description of the process, since it gives a probabilistic representation of the random process X(t) only at individual, fixed moments of time.

    A more complete characteristic is the two-dimensional probability density p(x1, x2; t1, t2), which allows one to take into account the relationship between the values taken by the random function at arbitrarily chosen moments of time t1 and t2.

    An exhaustive probabilistic characteristic of a random process is the n-dimensional probability density for sufficiently large n. However, a large number of problems related to the description of random signals can be solved on the basis of the two-dimensional probability density.

    Specifying the two-dimensional probability density allows, in particular, determination of an important characteristic of the random process, the covariance function

    B(t1, t2) = M[X(t1)·X(t2)].

    According to this definition, the covariance function of a random process is the statistically averaged product of the values of the random function X(t) at the moments t1 and t2.

    For each realization of the random process, the product x(t1)·x(t2) is a certain number. The set of realizations forms a set of random numbers whose distribution is characterized by the two-dimensional probability density p(x1, x2; t1, t2). For a given density, the averaging over the set is carried out according to the formula

    B(t1, t2) = ∫∫ x1·x2·p(x1, x2; t1, t2) dx1 dx2.

    When t1 = t2 = t, the two-dimensional random variable degenerates into a one-dimensional one, and we can therefore write B(t, t) = M[X²(t)].

    Thus, with a zero interval between the moments of time, the covariance function determines the mean square of the random process at the moment t.

    When analyzing random processes, the main interest is often the fluctuation component of the process. In such cases the correlation function

    R(t1, t2) = M[(X(t1) − m(t1))·(X(t2) − m(t2))]

    is used. Substituting the centered function X(t) − m(t) in place of X(t), one can obtain the following expression:

    R(t1, t2) = B(t1, t2) − m(t1)·m(t2). (4.8)

    When t1 = t2 = t, expression (4.8), in accordance with (4.4), determines the dispersion of the random process: R(t, t) = D(t).

    The study of a random process, as well as its impact on radio circuits, is significantly simplified when the process is stationary.

    A random process is called strictly stationary if its probability density of arbitrary order depends only on the intervals t2 − t1, …, tn − t1 and does not depend on the position of these intervals in the region of variation of the argument t.

    In radio engineering applications of the theory of random processes, the stationarity condition is usually limited to the requirement that the one-dimensional and two-dimensional probability densities be independent of time (a process stationary in the broad sense). The fulfillment of this condition allows us to assume that the mathematical expectation, the mean square and the variance of the random process do not depend on time, and that the correlation function depends not on the moments of time themselves but only on the interval between them, τ = t2 − t1.

    The stationarity of the process in a broad sense can be interpreted as stationarity within the framework of the correlation theory (for moments not higher than the second order).

    Thus, for a random process that is stationary in the broad sense, the previous expressions can be written without indicating fixed moments of time. In particular, m = const, D = const, and R(t1, t2) = R(τ).

    Further simplification of the analysis of random processes is achieved by using the condition of ergodicity of the process. A stationary random process is called ergodic if, when determining any of its statistical characteristics, averaging over the set of realizations is equivalent to averaging over time of one, theoretically infinitely long, realization.

    The condition of ergodicity of a random process also includes the condition of its stationarity. In accordance with the definition of an ergodic process, the relations given above are equivalent to the following expressions, in which the operation of time averaging is indicated by an overbar:

    x̄ = lim (1/T)·∫ x(t) dt (as T → ∞),

    R(τ) = lim (1/T)·∫ [x(t) − x̄]·[x(t + τ) − x̄] dt (as T → ∞). (4.15)

    If x(t) is an electrical signal (current, voltage), then x̄ is the constant component of the random signal, and R(0) is the average power of the signal fluctuations [relative to the constant component x̄].

    Expression (4.15) outwardly coincides with definition (2.131) of the correlation function of a deterministic (periodic) signal.

    The normalized correlation function

    r(τ) = R(τ)/R(0)

    is often used. The functions R(τ) and r(τ) characterize the relationship (correlation) between values of the process separated by the interval τ. The slower and smoother the random function changes in time, the larger the interval τ within which a statistical relationship between the instantaneous values of the random function is observed.

    In the experimental study of random processes, the time correlation characteristics of the process, (4.15)–(4.19), are used, since, as a rule, the experimenter can observe one realization of the signal rather than a set of its realizations. Integration is carried out, naturally, not over infinite limits, but over a finite interval T, whose length should be the greater, the higher the required accuracy of the measurement results.


    The mathematical model of the process of transmitting measurement information is a model of a random process with a given probability density. Useful signals and the interference acting on information-measuring systems are random processes that can be characterized by statistical averages and statistical characteristics.

    A random process is a more complex random phenomenon than a random variable, but it can be defined through random variables. A function X(t) (Fig. 4) is called a random process if its instantaneous values are random variables. Just as a random variable cannot be characterized by a single value, a random process cannot be defined by any single, however complex, function. A random process is a set of realizations (functions of time). A realization xi(t) is a fragment of the random process X(t) recorded as the result of the i-th experiment of limited duration T; a realization is thus understood as one of the possible outcomes of the random process. The random variable xi(tj), corresponding to the i-th realization and the j-th moment of time, is an instantaneous (sample) value, a special case of a random variable, and the probabilistic characteristics of a random process are based on the characteristics of the random variables entering into this process. The set of instantaneous values corresponding to the values of different realizations at the same moment of time tj is called the j-th sequence of the process X(t). When solving applied problems, one more often turns to realizations than to sequences.

    Experimentally, an ensemble of realizations of a random process can be obtained as a result of simultaneous registration of the output parameters xi(t) of objects of the same type, for example, measuring instruments, over a fixed time interval.

    If the argument t changes continuously, the dependence X(t) is a continuous random process (for example, the change in the error of a measuring device over a long time of its operation); if the argument t is a discrete quantity, we have a random sequence or time series (an array of error measurement results at known moments of time). A process X(t) taking a countable, limited number of values is called a discrete random process (for example, the sequence of operating states of the equipment of information-measuring systems or information-computing complexes).

    By defining a random process through random variables, the probabilistic characteristics of processes are found on the basis of the probabilistic characteristics of these variables.

    Fig. 4. Graphical representation of a random process

    The most complete description of a random process is given by the integral probability distribution function

    F(x1, …, xn; t1, …, tn)

    and the differential probability distribution function

    p(x1, …, xn; t1, …, tn).

    In the probability distribution functions of random processes, in contrast to the multidimensional probability distribution functions of random variables, the variables tj are added to the arguments xi, showing at what moments of time the readings were taken.

    For an approximate description of random processes, as well as for the description of random variables, numerical characteristics such as mathematical expectation, dispersion, etc. are used. Moreover, these numerical characteristics are also functions of time.

    The most commonly used probabilistic characteristics are the following.

    1. Mathematical expectation,

    mx(t) = M[X(t)] = ∫ x·p(x; t) dx;

    the estimate of the mathematical expectation of a random function is its average value over the realizations,

    mx(tj) ≈ (1/n)·Σ xi(tj).

    2. Dispersion, a non-random function

    Dx(t) = M[(X(t) − mx(t))²],

    where X(t) − mx(t) is the centered random process; the value of the dispersion for each tj is equal to the variance of the random variable xi(tj).

    The variance of a random function can be found through the differential probability distribution function of the random function:

    Dx(t) = ∫ (x − mx(t))²·p(x; t) dx.

    The variance estimate is its empirical value

    Dx(tj) ≈ (1/(n − 1))·Σ (xi(tj) − mx(tj))².

    Random processes with the same mathematical expectations and variances can differ significantly in shape (Fig. 4).

    3. The autocorrelation function characterizes the statistical relationship between the instantaneous values of a random process at different moments of time: the smaller the value of the autocorrelation function, the less the value of the measuring signal at time t1 depends on its value at time t2. It is determined by the relation

    Rx(t1, t2) = M[(X(t1) − mx(t1))·(X(t2) − mx(t2))],

    where t1, t2 are the fixed moments of time at which the cross-sections of the random function are determined.

    Since the averaged product turns into the averaged square when t1 = t2, for coinciding cross-sections the correlation function turns into the variance of the random function.

    For each pair of moments of time, the autocorrelation function is equal to the correlation moment, whose statistical estimate is

    Rx(t1, t2) ≈ (1/(n − 1))·Σ (xi(t1) − mx(t1))·(xi(t2) − mx(t2)).

    In the formulas defining the empirical estimates of the variance and the correlation function, the number of realizations n is reduced by one in order to obtain unbiased estimates.

    4. The cross-correlation function determines the statistical relationship between two signals X(t) and Y(t + τ):

    Rxy(τ) = M[(X(t) − mx)·(Y(t + τ) − my)].

    The study of the properties of random processes using correlation functions is called the correlation theory of random processes.

    5. Spectral density, a non-random function that establishes the density of the distribution of the dispersion of the process over the frequency ω; it is equal to the Fourier transform of the corresponding correlation function:

    Sx(ω) = ∫ Rx(τ)·e^(−jωτ) dτ.

    The correlation function can be expressed in terms of the spectral density by the inverse Fourier-transform relation

    Rx(τ) = (1/2π)·∫ Sx(ω)·e^(jωτ) dω.

    The relations that allow the spectral density to be transformed into the correlation function and vice versa are called the Wiener–Khinchin theorem.
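    As a numerical illustration of this theorem (a hypothetical MATLAB sketch for a sampled realization, using the built-in functions xcorr and fft):

    N = 1024;
    x = randn(1, N);                  % realization of a stationary process
    [R, lags] = xcorr(x, 'biased');   % ACF estimate R(tau)
    S = abs(fft(ifftshift(R)));       % spectral density as the Fourier transform of the ACF
    % the dispersion equals the ACF at zero lag and the mean of S over frequency:
    disp(R(lags == 0))                % dispersion estimate, close to 1
    disp(mean(S))                     % average of the spectral density, approximately the same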

    The properties of random signals are assessed using statistical (probabilistic) characteristics. These are non-random functions and (or) numbers, knowing which one can judge the regularities that are inherent in random signals but appear only in repeated observations.

    7.4.1. Characteristics of random signals that do not change over time

    The main statistical characteristics of a signal represented by the random variable (7.2) are: the distribution function F(x), the probability distribution density p(x) (PDF), the mathematical expectation mx, the variance Dx, the standard deviation σx, and the confidence interval. Let us look at these characteristics.

    1. The distribution function

    F(x) = P(X ≤ x), (7.64)

    where P(·) is the symbol of the probability of an event.

    2. The probability distribution density

    p(x) = dF(x)/dx. (7.65)

    The dimension of the PDF is the reciprocal of the dimension of the quantity x.

    3. The mathematical expectation

    mx = ∫ x·p(x) dx. (7.66)

    The result of calculations using this formula differs from the most probable value of the random variable and coincides with it only in the case of symmetric distribution laws (uniform, normal and others).

    4. The dispersion. The quantity X − mx is called a centered random variable; the mathematical expectation of such a quantity is zero. The dispersion of a random variable determines the weighted average of the squared deviation of this variable from its mathematical expectation. It is calculated using the formula

    Dx = ∫ (x − mx)²·p(x) dx (7.67)

    and has a dimension coinciding with the dimension of the square of the quantity x.

    5. The standard deviation, calculated by the formula

    σx = √Dx, (7.68)

    has a dimension that matches the dimension of the physical quantity being measured. Therefore, the standard deviation is a more convenient indicator of the degree of dispersion of possible values of a random variable relative to its mathematical expectation.

    In accordance with the “three sigma” rule, almost all values of a random variable with a normal distribution law fall within the interval (mx − 3σx, mx + 3σx) adjacent to the mathematical expectation of this quantity.

    6. The confidence interval is the range of possible values of a random variable within which this value lies with a predetermined confidence probability Pc. This range can be written as (mx − ε, mx + ε), i.e. the boundaries of the confidence interval are located symmetrically relative to the mathematical expectation of the signal, and the area of the curvilinear trapezoid with base 2ε under the PDF graph is equal to the confidence probability Pc (Fig. 7.7). As Pc grows, the confidence interval also increases.

    The half-width ε of the confidence interval can be determined by solving the equation

    ∫ p(x) dx = Pc (integration from mx − ε to mx + ε). (7.70)
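    For a given PDF, equation (7.70) can be solved numerically. A hypothetical MATLAB sketch for the normal PDF, using fzero:

    m = 0; s = 1; Pc = 0.95;
    pdf = @(x) exp(-(x - m).^2/(2*s^2))/(s*sqrt(2*pi));
    eq = @(e) integral(pdf, m - e, m + e) - Pc;   % equation (7.70)
    eps95 = fzero(eq, s)                          % half-width; about 1.96*s for Pc = 0.95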

    In the practice of engineering calculations, the most widely used of the listed statistical characteristics of a random signal is the PDF p(x). Knowing the PDF, one can determine all the other statistical characteristics of the signal. Therefore the function p(x) is a complete statistical characteristic of a random signal.


    Let us point out the main properties of the PDF:

    1. The PDF is non-negative: p(x) ≥ 0.

    2. The PDF and the distribution function are related by

    F(x) = ∫ p(ξ) dξ (integration from −∞ to x), (7.71)

    i.e., knowing the PDF, we can determine the distribution function of the random variable, and, conversely, knowing the distribution function, one can determine the PDF.

    3. From this follows the normalization condition

    ∫ p(x) dx = 1 (integration over the entire axis), (7.72)

    since the probability of the event X < ∞ is equal to one. If all possible values of the measured random variable occupy the interval from a to b, then the normalization condition for the PDF has the form

    ∫ p(x) dx = 1 (integration from a to b). (7.73)

    In any case, the area of the curvilinear trapezoid formed by the PDF graph is equal to one. This condition can be used to determine the analytical form (formula) of the PDF.

    The measurement process is characterized by the presence of many random variables and events involved in the formation of the measurement result. In addition to the measured quantity itself, these include non-informative parameters of the object under control, parameters of the measuring instrument, environmental parameters, and even the state of the consumer of the measurement information. Their combined influence on the measurement result is expressed in the fact that this result, obtained anew under (seemingly) unchanged measurement conditions, differs from the previous result. By carrying out repeated measurements and accumulating data (statistics), one can, firstly, get an idea of the degree of scatter of the measurement results and, secondly, try to find out the influence of each factor on the error of the measurement result.

    If several (two or more) random variables are considered, they form a system of random variables. In addition to the characteristics listed above for each random variable separately, such a system has additional characteristics that allow one to assess the level of statistical connections between all the random variables forming the system. These characteristics are the correlation moments (covariances) Kxy for each pair of random variables X, Y. They are calculated using the formula

    Kxy = ∫∫ (x − mx)·(y − my)·p(x, y) dx dy, (7.74)

    where p(x, y) is the two-dimensional PDF of the system of two random variables X and Y (with mathematical expectations mx and my, respectively), characterizing the joint distribution of these quantities.

    In the absence of a statistical connection between the quantities X and Y, the corresponding correlation moment is equal to zero (Kxy = 0). Such random variables are called statistically independent.

    When performing mathematical operations with random variables that have known statistical characteristics, it is important to be able to determine the statistical characteristics of the results of these operations. Below such characteristics are given for the simplest mathematical operations.

    For the sum z = x + y of two random variables, mz = mx + my. If the quantities are statistically independent, then Dz = Dx + Dy, i.e. the variance of the sum of independent random variables is equal to the sum of the variances of these variables. In the general case

    Dz = σx² + σy² + 2·r·σx·σy, where r = Kxy/(σx·σy)

    is the relative correlation coefficient of the summed quantities; the dispersion and RMS of the summation result thus significantly depend on its value. Table 7.2 gives formulas for determining the characteristics of the sum of two random variables.

    Table 7.2. Statistical characteristics of the sum of two random variables

    Relative correlation coefficient r    Dispersion Dz    RMS σz
    r = 1                                 (σx + σy)²       σx + σy
    r = −1                                (σx − σy)²       |σx − σy|
    r = 0                                 σx² + σy²        √(σx² + σy²)

    The equality r = 1 holds if the changes of the two quantities always have the same sign. If the signs of the changes of these quantities are always opposite to each other, then r = −1. Finally, if the quantities have finite variances and are statistically independent of each other, then r = 0. The converse is true only for normally distributed random variables.

    For the product of two random variables, if the quantities are statistically independent, then mxy = mx·my. Likewise, if z = φ(x, y) is a known function of two continuous random variables whose joint (two-dimensional) PDF p(x, y) is known, then the mathematical expectation and variance of such a random variable can be determined by the formulas

    mz = ∫∫ φ(x, y)·p(x, y) dx dy, Dz = ∫∫ (φ(x, y) − mz)²·p(x, y) dx dy. (7.80)

    All the previous formulas for calculating the results of mathematical operations with random variables can be obtained from these general formulas.

    7.4.3. Typical distributions of random signals

    Let us consider the statistical characteristics of continuous random variables having typical distributions.

    7.4.3.1. Uniform distribution.

    In the case of a uniform distribution, the random variable (7.2) falls with the same probability density at each point of a limited interval [a, b]. The PDF and the distribution function of such a random variable have the form (Fig. 7.8)

    p(x) = 1/(b − a) for a ≤ x ≤ b, p(x) = 0 outside this interval;
    F(x) = 0 for x < a; F(x) = (x − a)/(b − a) for a ≤ x ≤ b; F(x) = 1 for x > b. (7.81)

    The other (particular) statistical characteristics of such a random variable can be calculated using the formulas

    mx = (a + b)/2, Dx = (b − a)²/12, σx = (b − a)/(2√3), ε = Pc·(b − a)/2. (7.82)
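    Formulas (7.82) are easy to verify by simulation. A hypothetical MATLAB sketch with arbitrarily chosen bounds a and b:

    a = 2; b = 5;
    x = a + (b - a)*rand(1, 1e6);   % uniform samples on [a, b]
    [mean(x), (a + b)/2]            % sample mean vs mx = (a + b)/2
    [var(x), (b - a)^2/12]          % sample variance vs Dx = (b - a)^2/12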

    7.4.3.2. Triangular distribution (Simpson distribution)

    In this case, the PDF graph has the shape of a triangle with its vertex at the point x = c, and the graph of the integral distribution law represents a smooth conjugation of two parabolas at the point x = c, where a ≤ c ≤ b (Fig. 7.9):

    p(x) = 2(x − a)/((b − a)(c − a)) for a ≤ x ≤ c;
    p(x) = 2(b − x)/((b − a)(b − c)) for c < x ≤ b;
    p(x) = 0 outside [a, b]. (7.83)

    The mathematical expectation and variance of such a random variable can be calculated using the formulas

    mx = (a + b + c)/3, Dx = (a² + b² + c² − ab − ac − bc)/18. (7.84)

    If c = (a + b)/2, the Simpson distribution becomes symmetrical. In this case

    mx = (a + b)/2, Dx = (b − a)²/24, σx = (b − a)/(2√6). (7.85)

    7.4.3.3. Normal distribution (Gaussian distribution)

    The normal distribution refers to one of the most common distributions of random variables. This is partly due to the fact that the distribution of the sum of a large number of independent random variables with different distribution laws, often encountered in practice, approaches the normal distribution. In this case, the PDF and distribution function have the form

    p(x) = (1/(σ√(2π)))·exp(−(x − m)²/(2σ²)),

    F(x) = (1/(σ√(2π)))·∫ exp(−(ξ − m)²/(2σ²)) dξ (integration from −∞ to x). (7.86)

    The standard deviation and the mathematical expectation of such a random variable coincide with the parameters σ and m of the distribution (7.86), i.e. σx = σ, mx = m.

    The confidence interval in this case is not expressed through elementary functions, but it can always be found from equation (7.70). The result of solving this equation for a given value of the confidence probability Pc can be written in the form ε = z·σ, where z is a quantile whose value depends on the confidence probability Pc.

    The values of the quantile z are tabulated. Here are some of them: z = 1 for Pc ≈ 0.683; z = 2 for Pc ≈ 0.954; z = 3 for Pc ≈ 0.9973.

    This shows that with a fairly high probability (Pc ≈ 0.9973) almost all values of a random variable with a normal distribution fall into the interval (m − 3σ, m + 3σ), which has a width of 6σ. This property forms the basis of the “three sigma” rule.
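    The rule is easy to check numerically (a hypothetical MATLAB sketch with m = 0, σ = 1):

    x = randn(1, 1e6);          % normal samples with m = 0, sigma = 1
    share = mean(abs(x) <= 3)   % fraction of values inside (m - 3*sigma, m + 3*sigma)
    % the printed share is close to 0.9973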

    Fig. 7.10 shows graphs of the PDF and of the integral law of the normal distribution for two different values of the standard deviation and the same mathematical expectation.

    It can be seen that the PDF graph is a single-humped “resonance” curve with a maximum at the point x = m, located symmetrically relative to the mathematical expectation. The curve is “sharper” the smaller the standard deviation; accordingly, the smaller is the spread of possible values of the random variable relative to its mathematical expectation. In all cases, however, the area of the curvilinear trapezoid bounded by the PDF graph is equal to unity (see (7.72)).

    In probability theory, in addition to the characteristics discussed above, other characteristics of a random variable are also used: the characteristic function, the kurtosis, the counter-kurtosis, quantile estimates, etc. However, the characteristics considered are quite sufficient for solving most practical problems of measurement technology. Let us show an example of solving such a problem.

    Example 7.4: It is required to determine the parameter A (the vertex coordinate) of the probability density of a random measuring signal whose graph is shown in Fig. 7.11 (it is assumed that only the form of this graph is known).

    It is also required to determine the probability that the magnitude (modulus) of the signal will be greater than its standard deviation, i.e. the probability of the event |X| > σx.

    Solution: We determine the value of the parameter A from the normalization condition for the PDF (7.73), which in this case takes the corresponding form for the graph of Fig. 7.11. Here the first term corresponds to the area of the rectangle lying under the PDF graph to the left of the dotted line, and the second to the area of the right triangle lying to the right of this line. From the resulting equation we find A. Taking this result into account, the probability density function can be written explicitly.

    Now one can calculate the mathematical expectation, dispersion and standard deviation of the signal using formulas (7.66), (7.67) and (7.68). In Fig. 7.11 the dash-dot lines show the boundaries of the interval (mx − σx, mx + σx).

    In accordance with relation (7.71), the desired probability is equal to the sum of the areas under the PDF graph located to the left of the point −σx (in this example this area is zero) and to the right of the point +σx.

    7.4.4. Characteristics of random signals varying over time

    A random signal that varies over time generally contains a deterministic (systematic) component and a centered random (fluctuation) component, i.e.

    x(t) = f(t) + x°(t). (7.87)

    Fig. 7.12 shows the graph of one of a number of possible realizations of such a signal. The dotted line shows its deterministic component f(t), near which all other realizations of the signal are grouped and around which they oscillate.

    A complete picture of the characteristics of such a signal is given by the full set of all its realizations. In practice this set is always finite. Therefore, the experimentally found characteristics of a random signal should be considered estimates of its actual characteristics.

    At each moment of time (i.e. in each cross-section of the signal), the values of the random function of time (7.87) represent a random variable with the corresponding statistical characteristics discussed above. In particular, the deterministic component of the random signal at each moment of time coincides with the mathematical expectation of the corresponding random variable, i.e.

    f(t) = mx(t) = ∫ x·p(x; t) dx, (7.88)

    where p(x; t) is the one-dimensional PDF of the random process (7.87), which, in contrast to the PDF (7.65) of a random variable discussed above, depends not only on x but also on time.

    The degree of spread of the realizations of a random signal relative to its systematic component (7.88), i.e. the magnitude of the fluctuation component of the signal, is estimated by the value of the standard deviation of this component, which in the general case also depends on time:

    σx(t) = √Dx(t), (7.89)

    where Dx(t) is the dispersion of the random signal, calculated by the formula

    Dx(t) = ∫ (x − mx(t))²·p(x; t) dx. (7.90)

    For each moment of time one can determine the confidence interval (see (7.70)) and then construct the confidence region, i.e. the region into which the realizations of the random signal fall with a predetermined confidence probability (Fig. 7.13).


    The three characteristics considered (mx(t), σx(t) and the confidence region) are enough to obtain a general idea of the properties of a random measuring signal (7.87). However, they are not sufficient to judge the internal composition (spectrum) of such a signal.

    Fig. 7.14, in particular, shows graphs of realizations of two different random signals with the same mathematical expectation mx and the same dispersion Dx. The difference between these signals lies in the different spectral (frequency) composition of their realizations, i.e. in the differing degree of statistical connection between the values of the random signal at two moments of time t and t + τ, separated from each other by the amount τ. For the signal shown in Fig. 7.14, a, this connection is stronger than for the signal in Fig. 7.14, b.

    In the theory of random processes, such a statistical relationship is estimated using the autocorrelation function (ACF) of the random signal, which is calculated by the formula

    Rx(t, t + τ) = ∫∫ (x1 − mx(t))·(x2 − mx(t + τ))·p(x1, x2; t, t + τ) dx1 dx2, (7.91)

    where p(x1, x2; t, t + τ) is the two-dimensional PDF of the signal.

    One distinguishes stationary and non-stationary random signals. If the signal (7.87) is stationary, then its mathematical expectation (7.88) and dispersion (7.90) do not depend on time, and its ACF (7.91) depends not on the two arguments t and t + τ, but only on one argument, the value of the time interval τ. For such a signal

    mx = const, Dx = const, Rx(t, t + τ) = Rx(τ), where Rx(0) = Dx. (7.92)

    In other words, a stationary random signal is homogeneous in time, i.e. its statistical characteristics do not change when the time reference point changes.

    If, in addition to being stationary, a random signal is also ergodic, then its autocorrelation function can be calculated using the formula

    Rx(τ) = lim (1/T)·∫ x°(t)·x°(t + τ) dt (as T → ∞), (7.93)

    which does not require knowledge of the two-dimensional PDF, since any realization of the signal can be used in this formula. The dispersion of such a (stationary and ergodic) signal can be calculated using the formula

    Dx = Rx(0) = lim (1/T)·∫ x°(t)² dt (as T → ∞). (7.94)

    A sufficient condition for the ergodicity of a random signal is that its ACF tend to zero with unlimited growth of the time shift τ.

    The ACF of a random signal is often normalized to its variance. In this case the dimensionless normalized ACF is calculated by the formula

    ρx(τ) = Rx(τ)/Dx. (7.95)

    Fig. 7.15 shows a typical graph of such an ACF.

    Knowing this function, one can determine the correlation interval τk, i.e. the time after which the values of the random signal can be considered statistically independent of each other:

    τk = ∫ |ρx(τ)| dτ (integration from 0 to ∞). (7.96)

    From this formula it follows that the area under the graph of the normalized ACF coincides with the area of a rectangle of unit height which has the double correlation interval 2τk at its base (see Fig. 7.15).
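    An estimate of the normalized ACF and of the correlation interval can be obtained from a single realization. A hypothetical MATLAB sketch for a sampled signal with sampling step dt:

    dt = 1e-3;                            % sampling step, s (assumed)
    x = randn(1, 1e4);                    % centered realization of the signal
    [R, lags] = xcorr(x, 'biased');       % ACF estimate, cf. (7.93)
    rho = R / R(lags == 0);               % normalized ACF (7.95)
    tau_k = sum(abs(rho(lags >= 0)))*dt   % correlation interval, expression (7.96)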

    Let us explain the physical meaning of the correlation interval. If information about the behavior of a centered random signal “in the past” is known, then a probabilistic forecast of the signal is possible for a time of the order of the correlation interval τk. However, a forecast for a time exceeding the correlation interval is unreliable, since the instantaneous values of the signal so “far” apart in time are practically uncorrelated (i.e., statistically independent of each other).

    Within the framework of the spectral-correlation theory of random processes, to describe the properties of a stationary random signal it is enough to know only its ACF Rx(τ), or only the energy spectrum of the signal Sx(ω). These two functions are related to each other by the Wiener–Khinchin formulas

    Sx(ω) = ∫ Rx(τ)·e^(−jωτ) dτ, (7.97)

    Rx(τ) = (1/2π)·∫ Sx(ω)·e^(jωτ) dω. (7.98)

    To each energy spectrum there corresponds a well-defined function of the time shift τ, and, conversely, to each ACF there corresponds a well-defined power spectral density of the stationary random signal. Therefore, knowing the energy spectrum of the fluctuation component x°(t) of the random signal (7.87), we can determine the ACF of this component, and vice versa. This confirms that the frequency and correlation characteristics of a stationary random signal are closely related to each other.

    The properties of the ACF of a random signal are similar to the properties of the ACF of a deterministic signal.

    The autocorrelation function Rx(τ) characterizes the statistical connection between the values of a stationary random signal at moments of time separated from each other along the time axis by the amount τ; the smaller this connection, the smaller the corresponding value of the ACF. The energy spectrum Sx(ω) characterizes the distribution along the frequency axis of the energies of the harmonic components of the random signal.

    Knowing the energy spectrum Sx(ω), or the ACF Rx(τ), of the fluctuation component of the signal, one can calculate its dispersion and the effective width of its spectrum (frequency band) according to the formulas

    Dx = (1/2π)·∫ Sx(ω) dω, (7.99)

    Δωe = (1/Smax)·∫ Sx(ω) dω, (7.100)

    where Smax is the ordinate of the maximum point on the graph of the function Sx(ω).

    The effective width of the spectrum of a random signal is similar to the active spectrum width of a deterministic signal; like the latter, it determines the frequency range within which the overwhelming majority of the average signal power is concentrated (see (7.55)). Therefore, by analogy with (7.55), it can be determined from the relation

    ∫ Sx(ω) dω over the band Δωe = μ·∫ Sx(ω) dω over all frequencies, (7.101)

    where μ is a constant coefficient that determines the proportion of the random signal power falling within the frequency band Δωe (for example, μ = 0.95).

    Fig. 7.16 gives a graphical illustration of formulas (7.100) and (7.101). In the first case the frequency band coincides with the base of a rectangle whose height is Smax and whose area equals the area under the graph of Sx(ω) (Fig. 7.16, a); in the second, with the base of a curvilinear trapezoid whose area is the fraction μ of that area (Fig. 7.16, b). The frequency band of a narrow-band random process is located in the region of the average frequency ω0 of the spectrum (Fig. 7.16, c) and is calculated from the corresponding relation.

    The effective width of the spectrum of a random signal can also be determined in many other ways. In any case, the quantities Δωe and τk must be related by a relation similar to the uncertainty relation that holds for deterministic signals (see Section 7.3.3).


    Table 7.3 shows the spectral-correlation characteristics of three stationary random signals.

    The first row of this table shows the characteristics of so-called white noise, a specific random signal whose values, located arbitrarily close to each other, are independent random variables. The ACF of white noise has the form of a δ-function, and its energy spectrum contains harmonic components of any (including arbitrarily high) frequencies. The variance of white noise is an infinitely large number, i.e. the instantaneous values of such a signal can be arbitrarily large, and its correlation interval is zero.

    Table 7.3. Characteristics of stationary random signals
    (columns: autocorrelation function, correlation interval, energy spectrum)

    The second row of the table indicates the characteristics of low-frequency noise, and the third row those of narrow-band noise. If the average frequency of the narrow-band noise spectrum is small, these characteristics of the two noises are close to each other.
    , then these characteristics of these noises are close to each other.

    A random signal is called narrow-band if the effective width of its spectrum Δωe is significantly less than the average frequency ω0 of the spectrum. A narrow-band random signal can be written in quasi-harmonic form (see (7.12)), in which the envelope and the phase change much more slowly than the carrier oscillation of frequency ω0.

    The properties of the spectral-correlation characteristics of a stationary random signal are similar to the properties of the amplitude spectrum and the ACF of a deterministic signal. In particular, Rx(τ) and Sx(ω) are even functions, etc. There are also differences. The difference between the correlation functions is that the ACF of a deterministic signal characterizes the connection between the signal and its copy shifted by τ, while the ACF of a random signal characterizes the connection between the values of the signal at different moments of time.

    The difference between the spectral functions is that the energy spectrum Sx(ω) is not an exact frequency image of an individual random signal, but an averaged characteristic of the frequency properties of the whole ensemble of different realizations of this signal. This fact, as well as the absence in the energy spectrum of information about the phases of the harmonic components of the random signal, makes it impossible to reconstruct the shape of the signal from it.

    From formulas (7.97) and (7.98) it follows that the functions Rx(τ) and Sx(ω) are related to each other by Fourier transforms (see (7.46)). Therefore, the wider the spectrum of a random signal (the larger Δωe), the narrower its ACF and the smaller the correlation interval τk.
