Kinds and types of random processes. Characteristics of random processes. The concept of a random function. Stationary random processes

    29.06.2020

    Chapter 1. Basic concepts of the theory of random processes

    Definition of a random process. Basic approaches to the task

    Random processes. The concept of realization and section.

    Elementary random processes.

    A random (stochastic, probabilistic) process is a function of a real variable t whose values X(t) are random variables.

    In the theory of random processes, t is interpreted as time, taking values from some subset T of the set of real numbers (t ∈ T, T ⊆ R).

    Within the framework of classical mathematical analysis, the function y=f(t) is understood as such a type of dependence of the variables t and y, when a specific numerical value of the argument t corresponds to a single numerical value of the function y. For random processes, the situation is fundamentally different: specifying a specific argument t leads to the appearance of a random variable X(t) with a known distribution law (if it is a discrete random variable) or with a given distribution density (if it is a continuous random variable). In other words, the characteristic under study at each point in time is random in nature with a non-random distribution.

    The values that the ordinary function y = f(t) takes at each moment of time completely determine its structure and properties. For random processes the situation is entirely different: here it is not enough to know the distribution of the random variable X(t) at each value of t; information is also needed about the expected changes and their probabilities, that is, about the degree of dependence of the upcoming value of the random process on its history.

    The most general approach to describing random processes is to specify all of its multidimensional distributions, when the probability of simultaneous occurrence of the following events is determined:

    for any t1, t2, …, tn ∈ T, n ∈ N: {X(ti) ≤ xi}, i = 1, 2, …, n;

    F(t1; t2; …; tn; x1; x2; …; xn) = P(X(t1) ≤ x1; X(t2) ≤ x2; …; X(tn) ≤ xn).
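    This family of finite-dimensional distributions can be probed numerically. The sketch below is an illustration, not part of the source: it takes the hypothetical elementary process X(t) = A·t with A ~ N(0, 1), for which one elementary outcome fixes the whole trajectory, and estimates F(t1, t2; x1, x2) by Monte Carlo using only the Python standard library.

```python
import math
import random

random.seed(0)

def finite_dim_cdf(t_points, x_points, n_trials=100_000):
    # Monte Carlo estimate of F(t1,...,tn; x1,...,xn) = P(X(ti) <= xi for all i)
    # for the illustrative process X(t) = A*t, A ~ N(0, 1).
    hits = 0
    for _ in range(n_trials):
        a = random.gauss(0.0, 1.0)  # one elementary outcome fixes the whole trajectory
        if all(a * t <= x for t, x in zip(t_points, x_points)):
            hits += 1
    return hits / n_trials

est = finite_dim_cdf([1.0, 2.0], [1.0, 1.0])
# For this process the joint event reduces to {A <= min(x1/t1, x2/t2)} = {A <= 0.5}
exact = 0.5 * (1.0 + math.erf(0.5 / math.sqrt(2.0)))
print(est, exact)
```

    For a process without such a degenerate dependence structure the joint probability would not factor through a single random variable, which is exactly why the full family of multidimensional distributions is needed.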

    This method of describing random processes is universal but very cumbersome. To obtain significant results, the most important special cases are singled out, which allow the use of a more advanced analytical apparatus. In particular, it is convenient to consider a random process X(t, ω) as a function of two variables t ∈ T, ω ∈ Ω, which for any fixed value t ∈ T becomes a random variable defined on the probability space (Ω, A, P), where Ω is a non-empty set of elementary events ω; A is the σ-algebra of subsets of the set Ω, that is, the set of events; and P is a probability measure defined on A.

    The non-random numerical function x(t) = X(t, ω0), obtained for a fixed elementary event ω0, is called a realization (trajectory) of the random process X(t, ω).

    The section of a random process X(t, ω) is the random variable that corresponds to a fixed value t = t0.

    If the argument t takes all real values ​​or all values ​​from some interval T of the real axis, then we talk about a random process with continuous time. If t takes only fixed values, then we talk about a random process with discrete time.

    If the cross section of a random process is a discrete random variable, then such a process is called process with discrete states. If any section is a continuous random variable, then the random process is called process with continuous states.

    In the general case, a random process cannot be specified analytically. The exception is the so-called elementary random processes, whose form is known and in which random variables enter as parameters:

    X(t) = X(t; A1, …, An), where Ai, i = 1, …, n, are random variables with specific distributions.

    Example 1. Consider the random process X(t) = A·e^(−t), where A is a discrete random variable uniformly distributed over the values {−1; 0; 1}, and t ≥ 0. Sketch all realizations of the random process X(t) and show its sections at the times t0 = 0, t1 = 1, t2 = 2.

    Solution.

    This random process is a process with continuous time and discrete states. At t = 0, the section of the random process X(t) is the discrete random variable A, uniformly distributed over {−1; 0; 1}.


    At t = 1, the section of the random process X(t) is a discrete random variable uniformly distributed over {−1/e; 0; 1/e}.

    At t = 2, the section of the random process X(t) is a discrete random variable uniformly distributed over {−1/e²; 0; 1/e²}.
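    A sketch of Example 1 in code (standard library only): the three realizations of X(t) = A·e^(−t) and the sections at t = 0, 1, 2.

```python
import math

# Realizations of X(t) = A*exp(-t): one non-random function per value of A.
def realization(a, t):
    return a * math.exp(-t)

# Each section (fixed t0) is a discrete random variable with three equiprobable values.
sections = {}
for t0 in (0.0, 1.0, 2.0):
    sections[t0] = sorted(realization(a, t0) for a in (-1, 0, 1))

print(sections[0.0])  # the section at t=0 takes the values -1, 0, 1
print(sections[1.0])  # the section at t=1 takes the values -1/e, 0, 1/e
```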

    Example 2. Consider the random process X(t) = sin(At), where A is a discrete random variable taking the values {0; 1; 2} and the argument t takes the discrete values {0; π/4; π/2; π}. Depict graphically all realizations and sections of this random process.

    Solution.

    This random process is a process with discrete time and discrete states.

    Characteristics of random processes

    Example 3. Find the main characteristics of the random process Y(t) = X·e^(−t) (t ≥ 0), where X is a normally distributed random variable with mathematical expectation m and standard deviation σ.

    Solution.

    Mathematical expectation: mY(t) = M(X·e^(−t)) = e^(−t)·M(X) = m·e^(−t).

    Dispersion: DY(t) = D(X·e^(−t)) = e^(−2t)·D(X) = σ²·e^(−2t).

    Standard deviation: σY(t) = σ·e^(−t).

    Correlation function: KY(t1; t2) = M((X·e^(−t1) − m·e^(−t1))·(X·e^(−t2) − m·e^(−t2))) = e^(−(t1+t2))·M((X − m)²) = σ²·e^(−(t1+t2)).

    Normalized correlation function: ρY(t1; t2) = KY(t1; t2)/(σY(t1)·σY(t2)) = σ²·e^(−(t1+t2))/(σ²·e^(−(t1+t2))) = 1.

    According to the conditions of the problem, the random variable X is normally distributed; for a fixed value of t, the section Y(t) depends linearly on the random variable X, and by the property of the normal distribution the section Y(t) is also normally distributed, with the one-dimensional density of a normal law with mean m·e^(−t) and standard deviation σ·e^(−t).
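    The formulas for mY(t) and DY(t) can be checked by simulation. This is a sketch with assumed illustrative parameters m = 2, σ = 1 (the source leaves them symbolic):

```python
import math
import random

random.seed(1)
m, sigma = 2.0, 1.0          # assumed parameters of X for this check
t = 0.5
xs = [random.gauss(m, sigma) for _ in range(200_000)]
ys = [x * math.exp(-t) for x in xs]  # samples of the section Y(t)

m_y = sum(ys) / len(ys)
d_y = sum((y - m_y) ** 2 for y in ys) / len(ys)

print(m_y, m * math.exp(-t))               # empirical mean vs m*e^(-t)
print(d_y, sigma**2 * math.exp(-2 * t))    # empirical variance vs sigma^2*e^(-2t)
```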

    Example 4. Find the main characteristics of the random process Y(t) = W·e^(−Ut) (t > 0), where W and U are independent random variables; U is uniformly distributed on a segment; W has mathematical expectation mW and standard deviation σW.

    Solution.

    Mathematical expectation: mY(t) = M(W·e^(−Ut)) = M(W)·M(e^(−Ut)) = mW·M(e^(−Ut)) (t > 0).

    Correlation function:

    because

    Dispersion:

    Example 5. Find the one-dimensional distribution law of the random process Y(t) = V·cos(Ψt − U), where V and U are independent random variables; V is normally distributed with parameters (mV; σV); Ψ = const; U is uniformly distributed on a segment.

    Solution.

    Mathematical expectation of the random process Y(t):

    Dispersion:

    Standard deviation:

    We proceed to the derivation of the one-dimensional distribution law. Let t be a fixed moment of time, and let the random variable U take a fixed value U = u = const; then we obtain the following conditional characteristics of the random process Y(t):

    M(Y(t) | U = u) = mV·cos(Ψt − u);

    D(Y(t) | U = u) = σV²·cos²(Ψt − u);

    σ(Y(t) | U = u) = σV·|cos(Ψt − u)|.

    Since the random variable V is normally distributed and, for a given value of the random variable U = u, all sections depend linearly on V, the conditional distribution in each section is normal and has the following density:

    Unconditional one-dimensional density of the random process Y(t):

    Obviously, this distribution is no longer normal.
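    The non-normality of the unconditional law can be seen numerically. A minimal sketch with assumed illustrative parameters mV = 0, σV = 1, Ψ = 1, and assuming U uniform on [0, 2π] (the segment is not specified in the source); for a normal law the kurtosis M((Y − mY)⁴)/D² equals 3, while here it comes out near 4.5:

```python
import math
import random

random.seed(2)
m_v, s_v, psi, t = 0.0, 1.0, 1.0, 0.3    # assumed illustrative parameters
n = 200_000
ys = []
for _ in range(n):
    v = random.gauss(m_v, s_v)
    u = random.uniform(0.0, 2 * math.pi)  # assumed segment [0, 2*pi]
    ys.append(v * math.cos(psi * t - u))  # sample of the section Y(t)

mean = sum(ys) / n
m2 = sum((y - mean) ** 2 for y in ys) / n
m4 = sum((y - mean) ** 4 for y in ys) / n
kurt = m4 / m2 ** 2
print(mean, kurt)  # a normal law would give kurtosis 3
```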

    Convergence and continuity

    1. Convergence in probability.

    A sequence of random variables (Xn) is said to converge in probability to the random variable X as n → ∞ if for any ε > 0, P(|Xn − X| ≥ ε) → 0 as n → ∞.

    Designation: Xn → X in probability.

    Note that as n → ∞ we have classical convergence of a probability to 1; that is, as n increases, the probability can be guaranteed to be arbitrarily close to 1. At the same time, it is impossible to guarantee that the values of the random variables Xn are close to the values of the random variable X for any arbitrarily large n, since we are dealing with random variables.

    2. Stochastic continuity. A random process X(t), t ∈ T, is called stochastically continuous at a point t0 ∈ T if for any ε > 0, P(|X(t) − X(t0)| ≥ ε) → 0 as t → t0.

    3. Convergence in mean of order p ≥ 1.

    A sequence of random variables (Xn) is said to converge in mean of order p to a random variable X if M(|Xn − X|^p) → 0 as n → ∞.

    Designation: Xn → X in mean of order p.

    In particular, (Xn) converges in mean square to a random variable X if M(|Xn − X|²) → 0 as n → ∞.

    Designation:

    A random process X(t), t ∈ T, is called continuous in mean square at a point t0 ∈ T if M(|X(t) − X(t0)|²) → 0 as t → t0.

    4. Almost sure convergence (convergence with probability one).

    A sequence of random variables (Xn) is said to converge to a random variable X almost surely if P{ω : Xn(ω) → X(ω) as n → ∞} = 1,

    where ω ∈ Ω is an elementary event of the probability space (Ω, A, P).

    Designation: Xn → X almost surely.

    5. Weak convergence.

    The sequence (FXn(x)) of distribution functions of the random variables Xn is said to converge weakly to the distribution function FX(x) of a random variable X if it converges pointwise at each point of continuity of the function FX(x).

    Notation: FXn(x) ⇒ FX(x).
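    Convergence in probability can be illustrated numerically. A small sketch (standard library only) for the hypothetical sequence Xn = X + Z/n with Z ~ N(0, 1): the estimated probability P(|Xn − X| ≥ ε) shrinks toward zero as n grows.

```python
import random

random.seed(3)

def prob_far(n, eps=0.1, trials=50_000):
    # X_n = X + Z/n with Z ~ N(0, 1), so the deviation |X_n - X| equals |Z|/n.
    hits = sum(1 for _ in range(trials) if abs(random.gauss(0.0, 1.0)) / n >= eps)
    return hits / trials

probs = [prob_far(n) for n in (1, 10, 100)]
print(probs)  # the estimates decrease toward 0
```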

    Example 6. For the random process X(t) of Example 3, find the main characteristics of X(t), of its derivative X′(t) and of its integral Y(t).

    Solution.

    1) The mathematical expectation, dispersion, standard deviation, correlation function and normalized correlation function of the random process X(t) have the form (see. Example 3):

    2) Let us move on to calculating the characteristics of the random process X′(t). In accordance with Theorems 1–3 we get:

    With the exception of the mathematical expectation (which changed sign), all the other characteristics are completely preserved. The cross-correlation functions of the random process X(t) and its derivative X′(t) have the form:

    3) According to Theorems 4–6, the main characteristics of the integral of the random process X(t) have the following values:


    Cross correlation functions of the random process X(t) and its integral Y(t):

    An expression of the form

    X(t) = mX(t) + Σk Vk·φk(t),

    where φk(t), k = 1, 2, …, are non-random functions and Vk, k = 1, 2, …, are uncorrelated centered random variables, is called the canonical expansion of the random process X(t); the random variables Vk are called the coefficients of the canonical expansion, and the non-random functions φk(t) are called the coordinate functions of the canonical expansion.

    Let us consider the characteristics of a random process given by a canonical expansion. Since by condition M(Vk) = 0 and the coefficients Vk are uncorrelated, we obtain

    M(X(t)) = mX(t); KX(t1; t2) = Σk D(Vk)·φk(t1)·φk(t2); DX(t) = KX(t; t) = Σk D(Vk)·φk²(t).

    Obviously, the same random process has canonical expansions of different kinds depending on the choice of the coordinate functions. Moreover, even after the coordinate functions are chosen, there remains arbitrariness in the distribution of the random variables Vk. In practice, estimates of the mathematical expectation and of the correlation function are obtained from the results of experiments. After expanding the correlation function into a double Fourier series in the coordinate functions φk(t):

    one obtains the values of the variances DVk of the random variables Vk.

    Example 7. The random process X(t) has a canonical expansion X(t) = m0(t) + Σk Vk·φk(t), where the Vk are normally distributed uncorrelated random variables with parameters (0; σk) and m0(t) is a non-random function. Find the main characteristics of the random process X(t), including its distribution densities.

    Solution.

    From the general formulas obtained earlier we have:

    In each section the random process X(t) has a normal distribution, since it is a linear combination of uncorrelated normally distributed random variables Vk, and the one-dimensional distribution density has the form:

    The two-dimensional distribution law is also normal and has the following two-dimensional distribution density:

    Example 8. The mathematical expectation mX(t) and the correlation function KX(t1; t2) = t1·t2 of the random process X(t) are known. Find the canonical expansion of X(t) in coordinate functions, provided that the expansion coefficients Vk are normally distributed random variables.

    Solution.

    The correlation function has the following expansion:

    KX(t1; t2) = t1·t2 = φ1(t1)·φ1(t2)·D(V1),

    hence a single coordinate function suffices:

    φ1(t) = t; D(V1) = 1.

    Because the coefficient V1 is centered, M(V1) = 0, and by the condition it is normally distributed, the distribution density of the random variable V1 is

    f(v) = (1/√(2π))·e^(−v²/2).

    The canonical expansion of the random process X(t) has the form:

    X(t) = mX(t) + V1·t.
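    A quick Monte Carlo check of this result (a sketch, not part of the source): assuming the expansion reduces to a single term φ1(t) = t with D(V1) = 1, the empirical correlation function reproduces KX(t1; t2) = t1·t2.

```python
import random

random.seed(4)
t1, t2 = 0.5, 2.0
n = 200_000
# One-term canonical expansion X(t) = mX(t) + V*t with V ~ N(0, 1);
# mX cancels in the correlation function, so it is omitted here.
vs = [random.gauss(0.0, 1.0) for _ in range(n)]
k_emp = sum((v * t1) * (v * t2) for v in vs) / n  # estimate of M[(X(t1)-m)(X(t2)-m)]
print(k_emp, t1 * t2)  # empirical value vs K_X(t1, t2) = t1*t2
```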

    Stationary random processes in the narrow and broad senses

    A significant number of events occurring in nature, in particular those associated with the operation of technical devices, are of an “almost” steady-state nature, that is, the pattern of such events, subject to minor random fluctuations, nevertheless, in general, is preserved over time. In these cases, it is customary to talk about stationary random processes.

    For example, a pilot maintains a given flight altitude, but various external factors (gusts of wind, rising currents, changes in engine thrust, etc.) lead to the flight altitude fluctuating around the given value. Another example is the trajectory of a pendulum. If it were left to its own devices, then, provided there were no systematic factors leading to the damping of oscillations, the pendulum would be in a mode of steady oscillations. But various external factors (gusts of wind, random fluctuations of the suspension point, etc.), without changing the parameters of the oscillatory mode on the whole, nevertheless make the characteristics of the motion not deterministic but random.

    A random process is called stationary (homogeneous in time) if its statistical characteristics do not change over time, that is, they are invariant under time shifts.

    One distinguishes between processes stationary in the narrow sense and processes stationary in the broad sense.

    A random process X(t) is called stationary in the narrow sense if for any n ∈ N, any t1; t2; …; tn ∈ T and any τ such that t1+τ; t2+τ; …; tn+τ ∈ T, the condition

    F(t1; t2; …; tn; x1; x2; …; xn) = F(t1+τ; t2+τ; …; tn+τ; x1; x2; …; xn)

    is met and, therefore, all n-dimensional distributions depend not on the moments of time t1; t2; …; tn themselves but on the durations of the n−1 time intervals τi between them:

    In particular, the one-dimensional distribution density does not depend on time t at all:

    the two-dimensional density of the sections at the times t1 and t2 depends only on the difference τ = t2 − t1:

    the n-dimensional density of the sections at the times t1; t2; …; tn:

    A random process X(t) is called stationary in the broad sense if its moments of the first and second order are invariant with respect to a time shift, that is, its mathematical expectation does not depend on time t and is a constant, mX(t) = mX = const, and its correlation function depends only on the length of the time interval between the sections: KX(t1; t2) = KX(τ), where τ = t2 − t1.

    It is obvious that a random process stationary in the narrow sense is also stationary in the broad sense; the converse statement is not true.

    Properties of stationary random processes

    3. The correlation function of a stationary random process is even: KX(−τ) = KX(τ), since it has the symmetry KX(t1; t2) = KX(t2; t1).

    4. The variance of a stationary random process is a constant equal to the value of its correlation function at the point τ = 0: DX = KX(0).

    6. The correlation function of a stationary random process is positive definite, that is,

    The normalized correlation function of a stationary random process is also even and positive definite, and moreover ρX(0) = 1.

    Example 11. Find the characteristics of the random process X(t) and draw a conclusion about its type:

    X(t) = U1·cos(ωt) + U2·sin(ωt), where U1 and U2 are uncorrelated centered random variables with equal variances.

    Solution.

    Consequently, the random process X(t) is stationary in the broad sense. As follows from Example 10, if U1 and U2 are independent, centered and normally distributed random variables, then the random process is stationary in the narrow sense as well.
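    A numerical check of wide-sense stationarity (a sketch, assuming the classic form X(t) = U1·cos(ωt) + U2·sin(ωt) with uncorrelated centered U1, U2 of equal variance, and illustrative values ω = 1, D = 1): the empirical correlation function gives nearly the same value for two pairs of times with the same lag τ.

```python
import math
import random

random.seed(5)
omega, d = 1.0, 1.0        # assumed frequency and common variance of U1, U2
n = 200_000
u1 = [random.gauss(0.0, math.sqrt(d)) for _ in range(n)]
u2 = [random.gauss(0.0, math.sqrt(d)) for _ in range(n)]

def k_emp(t1, t2):
    # Empirical correlation function over n simulated realizations.
    s = 0.0
    for a, b in zip(u1, u2):
        x1 = a * math.cos(omega * t1) + b * math.sin(omega * t1)
        x2 = a * math.cos(omega * t2) + b * math.sin(omega * t2)
        s += x1 * x2
    return s / n

# Two pairs of times with the same lag tau = 1 (theory: K(tau) = d*cos(omega*tau)).
print(k_emp(0.0, 1.0), k_emp(2.0, 3.0), d * math.cos(omega * 1.0))
```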

    Example 12. Prove that the random process X(t) is stationary in the broad sense,

    where V and U are independent random variables; M(V) = mV = const; U is a random variable distributed uniformly on a segment.

    Solution.

    Let us write X(t) as follows:

    Since the random variable U is uniformly distributed on the segment, its distribution density has the form:

    hence,

    We get

    Since the random process X(t) has a constant mathematical expectation and a constant variance, and its correlation function is a function of τ = t2 − t1 only, the random process X(t) is stationary in the broad sense regardless of the distribution law of the random variable V.

    Stationarily connected random processes

    Random processes X(t) and Y(t) are called stationarily connected if their cross-correlation function depends only on the difference of the arguments τ = t2 − t1:

    RXY(t1; t2) = rXY(τ).

    Stationarity of the random processes X(t) and Y(t) does not imply that they are stationarily connected.

    Let us note the main properties of stationarily connected random processes and of the derivative and integral of stationary random processes:

    1) rXY(τ) = rYX(−τ);

    2) the derivative X′(t) of a stationary random process is stationary;

    3) the correlation function of the derivative is expressed through the second derivative of KX(τ);

    5) the cross-correlation function of a stationary process and its derivative is expressed through the first derivative of KX(τ);

    6) the integral of a stationary random process is, in general, non-stationary.

    Example 13. The correlation function of a stationary random process X(t) has the form

    Find the correlation functions, variances and cross-correlation functions of the random processes X(t), X′(t) and the integral Y(t).

    Solution.

    Let us limit our analysis to the case DX(t) = 1.

    Let's use the following relation:

    We get:

    Note that upon differentiation the stationary random process X(t) turns into a stationary random process X′(t), and X(t) and X′(t) are stationarily connected. Upon integration of the stationary random process X(t), a non-stationary random process Y(t) arises, and X(t) and Y(t) are not stationarily connected.

    Ergodic random processes and their characteristics

    Among stationary random processes there is a special class of processes called ergodic, which have the following property: their characteristics obtained by averaging over the set of all realizations coincide with the corresponding characteristics obtained by averaging over time a single realization observed on an interval (0, T) of sufficiently long duration. That is, over a sufficiently long time any realization passes through any state regardless of the initial state of the system at t = 0; in this sense, any single realization fully represents the whole set of realizations.
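    Ergodicity in the mean can be illustrated with a short simulation (a sketch for the hypothetical process X(t) = cos(t + Φ), with Φ uniform on [0, 2π], whose ensemble mean is 0): the time average of a single long realization and the ensemble average over many realizations agree.

```python
import math
import random

random.seed(6)
# One realization of X(t) = cos(t + phi), phi ~ Uniform(0, 2*pi).
phi = random.uniform(0.0, 2 * math.pi)
dt, T = 0.01, 1000.0
steps = int(T / dt)
time_avg = sum(math.cos(k * dt + phi) for k in range(steps)) * dt / T

# Ensemble average at a fixed time t0 over many realizations.
t0 = 3.0
n = 100_000
ens_avg = sum(math.cos(t0 + random.uniform(0.0, 2 * math.pi)) for _ in range(n)) / n

print(time_avg, ens_avg)  # both close to 0
```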

    In practice there are random variables that change continuously in the course of a single experiment, depending on time or on some other arguments. For example, the error in tracking an aircraft by radar does not remain constant but changes continuously over time. At each moment it is random, but its values at different moments of time while tracking a single aircraft are different. Other examples are: the lead angle when continuously aiming at a moving target; the error of a radio range finder when continuously measuring a changing range; the deviation of the trajectory of a guided projectile from the theoretical one during control or homing; fluctuation (shot and thermal) noise in radio devices, and so on. Such random variables are called random functions. A characteristic feature of such functions is that their form cannot be specified exactly before the experiment. A random function and a random variable are related to each other in the same way as a function and a constant are in mathematical analysis.

    Definition 1. A random function is a function that associates to each outcome of an experiment a certain numerical function; that is, it is a mapping of the space Ω into a certain set of functions (Figure 1).

    Definition 2. A random function is a function that, as a result of an experiment, can take one or another specific form; which one is not known in advance.


    The specific form taken by a random function as a result of an experiment is called a realization of the random function.

    Because of this unpredictability of behavior, it is impossible to depict a random function in general form on a graph. One can only write down its specific form, that is, the realization obtained as a result of an experiment. Random functions, like random variables, are usually denoted by capital letters of the Latin alphabet, X(t), Y(t), Z(t), and their possible realizations by x(t), y(t), z(t), respectively. The argument t of a random function is in the general case an arbitrary (non-random) independent variable or a set of independent variables.

    A random function is called a random process if its argument is time. If the argument of a random function is discrete, then it is called a random sequence. For example, a sequence of random variables is a random function of an integer argument. Figure 2 shows, as an example, realizations x1(t), x2(t), …, xn(t) of a random function X(t), which are continuous functions of time. Such functions are used, for example, for the macroscopic description of fluctuation noise.

    Random functions occur whenever we deal with a continuously operating system (a measurement, control, guidance or regulation system): when analyzing the accuracy of the system, we have to take into account the presence of random influences (fields). The air temperature in different layers of the atmosphere is considered as a random function of the height H; the position of a rocket's center of mass (its vertical coordinate z in the firing plane) is a random function of its horizontal coordinate x. In each experiment (launch) with the same aiming data this position is always somewhat different and differs from the theoretically calculated one.

    Consider some random function X(t). Suppose that n independent experiments were carried out with it, as a result of which n realizations x1(t), x2(t), …, xn(t) were obtained (Figure 3). Each realization is obviously an ordinary (non-random) function. Thus, as a result of each experiment, the random function X(t) turns into an ordinary non-random function.

    Let us fix some value of the argument, t = t0, and draw through it a straight line parallel to the ordinate axis (Figure 3). This straight line will intersect the realizations at certain points.

    Definition. The set of intersection points of realizations of a random function with a straight line t = t0 is called the cross section of a random function.

    Obviously, the section represents a random variable whose possible values are the ordinates of the points of intersection of the line t = t0 with the realizations xi(t), i = 1, …, n.

    Thus, a random function combines the features of a random variable and a function. If you fix the value of the argument, it turns into an ordinary random variable; as a result of each experiment, it turns into an ordinary (non-random) function.

    For example, if we draw two sections t = t1 and t = t2, we obtain two random variables X(t1) and X(t2), which together form a system of two random variables.

    2 Laws of distribution

    A random function of a continuously changing argument is, on any arbitrarily small interval of its change, equivalent to an infinite, uncountable set of random variables that cannot even be enumerated. Therefore, for a random function it is impossible to define a distribution law in the usual way, as for ordinary random variables and random vectors. To study random functions, an approach is used based on fixing one or more values of the argument t and studying the resulting random variables; that is, random functions are studied in separate sections corresponding to different values of the argument t.


    Fixing one value t1 of the argument t, consider the random variable X1 = X(t1). For this random variable the distribution law can be determined in the usual way: the distribution function F1(x1, t1) and the probability density f1(x1, t1). These laws are called the one-dimensional distribution laws of the random function X(t). Their peculiarity is that they depend not only on the possible value x1 of the random function X(t) at t = t1, but also on how the value t1 of the argument t is chosen; that is, the distribution laws of the random variable X1 = X(t1) depend on the argument t1 as on a parameter.

    Definition. The function F1(x1, t1) = P(X(t1) < x1) is called the one-dimensional probability distribution function of a random function, or

    F1(x, t) = P(X(t) < x). (1)

    Definition. If the distribution function F1(x1, t1) = P(X(t1) < x1) is differentiable with respect to x1, then this derivative is called the one-dimensional probability distribution density (Figure 4), or

    . (2)

    The one-dimensional distribution density of a random function has the same properties as the distribution density of a random variable. In particular:

    1) f1(x, t) ≥ 0;

    2) for every t the normalization condition ∫ f1(x, t) dx = 1 holds, where the integral is taken over all x.

    One-dimensional distribution laws do not completely describe a random function, since they do not take into account the dependencies between the values ​​of a random function at different times.

    Since for a fixed value of the argument t a random function turns into an ordinary random variable, fixing n values of the argument yields a set of n random variables X(t1), X(t2), …, X(tn), that is, a system of random variables. Therefore, specifying the one-dimensional distribution density f1(x, t) of the random function X(t) for an arbitrary value of the argument t is similar to specifying the densities of the individual quantities included in the system. A complete description of a system of random variables is given by the joint law of their distribution. Therefore, a more complete characterization of the random function X(t) is the n-dimensional distribution density of the system, that is, the function fn(x1, x2, …, xn, t1, t2, …, tn).

    In practice, finding the n-dimensional distribution law of a random function usually causes great difficulties, so one is usually limited to the two-dimensional distribution law, which characterizes the probabilistic relationship between the pairs of values X(t1) and X(t2).

    Definition. The two-dimensional distribution density of a random function X(t) is the joint distribution density of its values X(t1) and X(t2) for two arbitrary values t1 and t2 of the argument t.

    f2(x1, x2, t1, t2)= (3)


    The normalization condition for the two-dimensional distribution density has the form

    ∬ f2(x1, x2, t1, t2) dx1 dx2 = 1. (6)

    3 Characteristics of a random process:

    mathematical expectation and variance

    When solving practical problems, in most cases, obtaining and using multidimensional densities to describe a random function involves cumbersome mathematical transformations. In this regard, when studying a random function, the simplest probabilistic characteristics, similar to the numerical characteristics of random variables (mathematical expectation, dispersion), are most often used and rules for operating with these characteristics are established.

    In contrast to the numerical characteristics of random variables, which are constant numbers, the characteristics of a random function are non-random functions of its arguments.

    Consider the random function X(t) at a fixed t. In the section we have an ordinary random variable. Obviously, in the general case the mathematical expectation depends on t, that is, it is some function of t:

    mx(t) = M[X(t)]. (7)

    Definition. The mathematical expectation of a random function X(t) is the non-random function mx(t) whose value for each t is equal to the mathematical expectation of the corresponding section of the random function (Figure 5).

    To calculate the mathematical expectation of a random function, it is enough to know its one-dimensional distribution density

    The mathematical expectation is also called the non-random component of the random function X(t), while the difference

    X°(t) = X(t) − mx(t) (9)

    is called the fluctuation part of the random function, or the centered random function.

    Definition. Variance of a random function X(t) is called a non-random function whose value for each t is equal to the dispersion of the corresponding section of the random function.

    From the definition it follows that

    The variance of a random function characterizes, for each t, the spread of the possible realizations of the random function relative to the average; in other words, the "degree of randomness" of the random function (Figure 6).
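    The section-wise definitions of mx(t) and Dx(t) translate directly into code. A minimal sketch for an assumed illustrative process X(t) = A + B·t with independent A, B ~ N(0, 1), so that in theory mx(t) = 0 and Dx(t) = 1 + t²:

```python
import random

random.seed(7)
# n realizations of X(t) = A + B*t on a grid of section times ts.
n, ts = 50_000, [0.0, 1.0, 2.0]
realizations = []
for _ in range(n):
    a, b = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    realizations.append([a + b * t for t in ts])

# Averaging over the set of realizations, one section at a time.
m_x = [sum(r[i] for r in realizations) / n for i in range(len(ts))]
d_x = [sum((r[i] - m_x[i]) ** 2 for r in realizations) / n for i in range(len(ts))]
print(m_x)  # theory: m_x(t) = 0
print(d_x)  # theory: D_x(t) = 1 + t^2
```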

    Examples: target coordinates measured by radar; the angle of attack of an aircraft; the load in an electrical circuit.

    5. Types of random processes.

    In mathematics there is the concept of a random function.

    A random function is a function that, as a result of an experiment, takes one or another specific form, and which one is not known in advance. The argument of such a function is not random. If the argument is time, then such a function is called a random process. Examples of random processes:

    The peculiarity of a random function (process) is that for a fixed value of the argument t the random function is a random variable, i.e. at t = ti, X(t) = X(ti) is a random variable.

    Fig. 2.1. Graphical representation of a random function

    The values of a random function for a fixed argument are called its section. Since a random function can have an infinite number of sections, and in each section it represents a random variable, a random function can be considered as an infinite-dimensional random vector.

    The theory of random functions is often called the theory of random (stochastic) processes.

    For each section of a random process one can specify mx(ti), Dx(ti), σx(ti) and, in the general case, the distribution density f(x, ti).

    In addition to random functions of time, random functions of the coordinates of a point in space are sometimes used. These functions associate each point in space with a certain random variable.

    The theory of random functions of the coordinates of a point in space is called the theory of random fields. Example: the wind velocity vector in a turbulent atmosphere.

    Depending on the type of the function and the type of its argument, four types of random processes are distinguished.

    Table 2.1 Types of random processes


    In addition, there are:

    1. Stationary random process: a process whose probabilistic characteristics do not depend on time, i.e. f(x, t1) = f(x, t2) = … = f(x, tn) = f(x).

    2. Normal (Gaussian) random process: the joint probability density of the sections t1 … tn is normal.

    3. Markov random process (a process without aftereffect): its state at each moment of time depends only on the state at the preceding moment and does not depend on earlier states. A Markov chain is a sequence of sections of a Markov random process.

    4. Random process of the white-noise type: at each moment the state does not depend on the previous one.

    There are also other kinds of random processes.
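    The difference between a white-noise-type sequence and a Markov (AR(1)-type) sequence shows up in the correlation between neighbouring sections. A sketch with assumed illustrative parameters (ρ = 0.8, unit-variance Gaussian innovations):

```python
import random

random.seed(8)
n = 100_000
# White-noise-type sequence: independent values.
white = [random.gauss(0.0, 1.0) for _ in range(n)]
# Markov (AR(1)) sequence: the next state depends only on the current one.
rho = 0.8
markov = [0.0]
for _ in range(n - 1):
    markov.append(rho * markov[-1] + random.gauss(0.0, 1.0))

def lag1_corr(xs):
    # Sample correlation between neighbouring sections x_k and x_{k+1}.
    m = sum(xs) / len(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((a - m) ** 2 for a in xs)
    return num / den

print(lag1_corr(white), lag1_corr(markov))  # ~0.0 and ~0.8
```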

    Before defining a random process, let us recall the basic concepts of the theory of random variables. As is known, a random variable is a quantity that, as a result of an experiment, takes one or another value that is not known in advance. Random variables are discrete or continuous. The main characteristic of a random variable is its distribution law, which can be specified as a graph or in analytical form. In the integral form the distribution law is given by the distribution function F(x) = P(X < x), the probability that the current value of the random variable X is less than x. In the differential form it is given by the probability density f(x) = dF(x)/dx. The numerical characteristics of random variables are the so-called moments, of which the most common are the first-order moment, the mean value (mathematical expectation) of the random variable, and the second-order central moment, the dispersion. If there are several random variables (a system of random variables), the concept of the correlation moment is introduced.

    A generalization of the concept of a random variable is the concept of a random function, i.e. a function that, as a result of an experiment, can take one form or another, not known in advance. If the argument of the function is time t, it is called a random, or stochastic, process.

    The specific form of a random process obtained as a result of an experiment is called a realization of the random process and is an ordinary non-random (deterministic) function. On the other hand, at a fixed moment of time we have the so-called section of the random process, which is a random variable.

    To describe random processes, the concepts of the theory of random variables are generalized in a natural way. For a fixed moment of time t1 the random process turns into a random variable X(t1), for which a function F(x1; t1) can be introduced, called the one-dimensional distribution law of the random process. The one-dimensional law is not an exhaustive characteristic of a random process: for example, it does not characterize the correlation (connection) between individual sections of the process. Taking two different moments of time t1 and t2, one can introduce a two-dimensional distribution law F(x1, x2; t1, t2), and so on. In what follows we confine ourselves mainly to one-dimensional and two-dimensional laws.

    Let us consider the simplest characteristics of a random process, analogous to the numerical characteristics of a random variable: the mathematical expectation (set average)

        m_x(t) = M[X(t)]                                (3.1)

    and the dispersion

        D_x(t) = M[(X(t) − m_x(t))²].                   (3.2)

    The mathematical expectation is a certain average curve around which the individual realizations of the random process are grouped, while the dispersion characterizes the spread of possible realizations at each moment of time. Sometimes the standard deviation σ_x(t) = √D_x(t) is used.
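As a sketch of how these set averages are estimated in practice, the following example (our own; the sinusoidal mean curve and the noise level 0.5 are arbitrary illustrative choices) averages across an ensemble of simulated realizations:

```python
import numpy as np

rng = np.random.default_rng(42)

# Ensemble of realizations: each row is one realization x(t), each column
# is a cross-section of the process at a fixed moment of time t.
n_realizations, n_times = 10_000, 50
t = np.linspace(0.0, 1.0, n_times)
# Illustrative process: a deterministic mean curve plus independent noise.
ensemble = np.sin(2 * np.pi * t) + rng.normal(0.0, 0.5, (n_realizations, n_times))

# Set averages taken over the ensemble, one value per time point.
m_x = ensemble.mean(axis=0)    # estimate of the mathematical expectation (3.1)
D_x = ensemble.var(axis=0)     # estimate of the dispersion (3.2)
sigma_x = np.sqrt(D_x)         # standard deviation

print(np.allclose(m_x, np.sin(2 * np.pi * t), atol=0.05))
print(np.allclose(D_x, 0.25, atol=0.05))
```

With 10,000 realizations the estimated mean curve tracks the true sinusoid, and the estimated dispersion stays close to the true noise variance 0.25 at every time point.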

    To characterize the internal structure of a random process, the correlation (autocorrelation) function is introduced:

        K_x(t1, t2) = M[(X(t1) − m_x(t1))(X(t2) − m_x(t2))].        (3.3)

    Along with the mathematical expectation (the set average) (3.1), another characteristic of the random process is introduced: the average value of the random process over an individual realization (the time average)

        x̄ = lim (T → ∞) (1/T) ∫₀ᵀ x(t) dt.

    For two random processes one can also introduce, by analogy with (3.3), the concept of the cross-correlation function.

    One special case of a random process widely used in practice is the stationary random process: a random process whose probabilistic characteristics do not depend on time. Thus, for a stationary random process m_x(t) = m_x = const and D_x(t) = D_x = const, and the correlation function depends only on the difference τ = t2 − t1, i.e. K_x(t1, t2) = K_x(τ) is a function of one argument.

    A stationary random process is to some extent analogous to ordinary steady-state processes in control systems.

    Stationary random processes have an interesting property known as the ergodic hypothesis: for a stationary random process, any average over the set is equal to the corresponding average over time. Among other things, this property often makes it possible to simplify the physical and mathematical modelling of systems under random influences.
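The ergodic property can be checked numerically. In this sketch (our own; for simplicity the stationary process is modelled as Gaussian white noise with an assumed mean m = 2), the time average over one long realization is compared with the set average over one cross-section:

```python
import numpy as np

rng = np.random.default_rng(0)
m, sigma = 2.0, 1.0   # assumed stationary mean and standard deviation

# Time average over one long realization of a stationary white-noise process.
one_realization = rng.normal(m, sigma, 200_000)
time_average = one_realization.mean()

# Set average over many realizations at a single fixed moment of time.
one_section = rng.normal(m, sigma, 200_000)
set_average = one_section.mean()

print(abs(time_average - m) < 0.02, abs(time_average - set_average) < 0.02)
```

Both averages converge to the same value m, which is exactly what the ergodic hypothesis asserts for this kind of process.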

    As is known, the analysis of deterministic signals makes wide use of their spectral characteristics based on the Fourier series or the Fourier integral. A similar concept can be introduced for stationary random processes. The difference is that for a random process the amplitudes of the harmonic components are random, and the spectrum of a stationary random process describes the distribution of dispersions over the various frequencies.

    The spectral density of a stationary random process is related to its correlation function by the Fourier transform pair:

        S_x(ω) = ∫₋∞⁺∞ K_x(τ) e^(−jωτ) dτ,

        K_x(τ) = (1/2π) ∫₋∞⁺∞ S_x(ω) e^(jωτ) dω,

    where the correlation function K_x(τ) is treated as the original and S_x(ω) as the image.

    There are tables linking originals and images. For example, the exponential correlation function K_x(τ) = D e^(−α|τ|) corresponds to the spectral density S_x(ω) = 2Dα / (α² + ω²).

    Let us note the connection of the spectral density and the correlation function with the dispersion D:

        D = K_x(0) = (1/2π) ∫₋∞⁺∞ S_x(ω) dω.
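This relation can be verified numerically for a concrete spectral density. In the sketch below, the exponential-correlation pair K_x(τ) = D e^(−α|τ|), S_x(ω) = 2Dα/(α² + ω²) is a standard textbook transform pair, and the values of D and α are arbitrary choices of ours; the dispersion is recovered by integrating the spectral density over frequency:

```python
import numpy as np

D, alpha = 1.5, 2.0   # illustrative dispersion and decay rate (our own choice)

# Spectral density S(omega) = 2*D*alpha / (alpha**2 + omega**2), the known
# Fourier image of the correlation function K(tau) = D * exp(-alpha*|tau|).
omega = np.linspace(-500.0, 500.0, 1_000_001)
S = 2 * D * alpha / (alpha**2 + omega**2)

# Check D = (1/(2*pi)) * integral of S(omega) d omega (Riemann sum on the grid;
# the truncation of the infinite limits at +/-500 leaves a negligible tail).
d_omega = omega[1] - omega[0]
D_from_spectrum = np.sum(S) * d_omega / (2 * np.pi)
print(abs(D_from_spectrum - D) < 1e-2)
```

The numerically integrated spectrum returns the dispersion D to within the truncation error of the finite frequency window.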

    1.1.1. Gaussian random processes

    A random process X(t) is called Gaussian if all of its finite-dimensional distributions are normal, that is, for any

    t1, t2, …, tn ∈ T

    the random vector

    (X(t1); X(t2); …; X(tn))

    has the following distribution density:

        f(x1, …, xn) = 1 / ((2π)^(n/2) √|C|) · exp( −(1/(2|C|)) Σ_{i,j=1..n} A_ij (x_i − a_i)(x_j − a_j) ),

    where a_i = MX(t_i); σ_i² = M(X(t_i) − a_i)²; c_ij = M((X(t_i) − a_i)(X(t_j) − a_j)); |C| = det(c_ij) is the determinant of the covariance matrix C = (c_ij); A_ij is the algebraic complement (cofactor) of the element c_ij.
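The cofactor form of the density can be checked against the usual inverse-matrix form of the multivariate normal density. A small numerical sketch (the two-section means and the covariance matrix are hypothetical values chosen purely for illustration):

```python
import numpy as np

# Hypothetical two-section example: means a_i and covariance matrix c_ij.
a = np.array([0.0, 1.0])
C = np.array([[1.0, 0.5],
              [0.5, 2.0]])
x = np.array([0.3, 1.2])

n = len(a)
detC = np.linalg.det(C)
# Cofactor matrix A_ij: by Cramer's rule, A_ij / det(C) is the (i, j)
# element of C^{-1} (for symmetric C the transpose changes nothing).
A = detC * np.linalg.inv(C).T

# Density via the cofactor formula from the text.
quad = sum(A[i, j] * (x[i] - a[i]) * (x[j] - a[j])
           for i in range(n) for j in range(n))
f = np.exp(-quad / (2 * detC)) / ((2 * np.pi) ** (n / 2) * np.sqrt(detC))

# Cross-check against the usual inverse-matrix form of the density.
diff = x - a
f_direct = np.exp(-0.5 * diff @ np.linalg.inv(C) @ diff) / (
    (2 * np.pi) ** (n / 2) * np.sqrt(detC))
print(abs(f - f_direct) < 1e-12)
```

Both evaluations agree, confirming that the cofactor sum in the exponent is just the quadratic form built from the inverse covariance matrix.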

    1.1.2. Random processes with independent increments

    A random process X(t) is called a process with independent increments if its increments on non-overlapping time intervals are mutually independent: for any

    t1, t2, …, tn ∈ T, t1 ≤ t2 ≤ … ≤ tn,

    the random variables

    X(t2) − X(t1); X(t3) − X(t2); …; X(tn) − X(t_{n−1})

    are independent.
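A minimal numerical illustration (our own construction: a discrete Wiener-like process built as the cumulative sum of independent Gaussian steps) shows that increments on non-overlapping intervals are uncorrelated, as independence implies:

```python
import numpy as np

rng = np.random.default_rng(1)

# A Wiener-like process built from independent Gaussian steps: X(t_k) is
# the cumulative sum of the steps, so its increments on non-overlapping
# intervals are independent by construction.
n_paths, n_steps = 100_000, 4
steps = rng.normal(0.0, 1.0, (n_paths, n_steps))
X = np.cumsum(steps, axis=1)

inc1 = X[:, 1] - X[:, 0]   # increment on the first interval
inc2 = X[:, 3] - X[:, 2]   # increment on a later, non-overlapping interval

corr = np.corrcoef(inc1, inc2)[0, 1]
print(abs(corr) < 0.02)
```

The sample correlation between the two increments is statistically indistinguishable from zero across 100,000 simulated paths.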

    1.1.3. Random processes with uncorrelated increments

    A random process X(t) is called a process with uncorrelated increments if the following conditions are met:

    1) for all t ∈ T: MX²(t) < ∞;

    2) for all t1, t2, t3, t4 ∈ T with t1 ≤ t2 ≤ t3 ≤ t4: M[(X(t2) − X(t1))(X(t4) − X(t3))] = 0.

    1.1.4. Stationary random processes (see Chapter 5)

    1.1.5. Markov random processes

    Let us confine ourselves to the definition of a Markov random process with discrete states and discrete time (a Markov chain).

    Let system A be in one of the mutually exclusive states A1; A2; …; An, and suppose the probability P_ij(s) that in the s-th trial the system passes from state A_i to state A_j does not depend on the states of the system in the trials preceding the (s−1)-th. A random process of this type is called a Markov chain.
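A minimal simulation sketch of such a chain (the 3-state transition matrix P is a hypothetical example of ours; P[i][j] plays the role of the transition probability from A_i to A_j):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical transition matrix: P[i, j] is the probability of moving
# from state A_i to state A_j; each row must sum to 1.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

def run_chain(P, start, n_steps, rng):
    """Simulate a Markov chain: the next state depends only on the current one."""
    states = [start]
    for _ in range(n_steps):
        current = states[-1]
        states.append(rng.choice(len(P), p=P[current]))
    return states

path = run_chain(P, start=0, n_steps=10, rng=rng)
print(len(path), all(0 <= s < 3 for s in path))
```

Each step of the simulation consults only the current state's row of P, which is precisely the Markov property in the definition above.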

    1.1.6. Poisson random processes

    A random process X(t) is called a Poisson process with parameter a (a > 0) if it has the following properties:

    1) ∀ t ∈ T, where T = [0; +∞);
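One standard way to simulate a Poisson process with parameter a uses independent exponential inter-arrival times with mean 1/a; this construction is general knowledge assumed here, not stated in the text. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
a = 4.0          # process intensity (parameter a > 0), an arbitrary choice
t_max = 1000.0   # observation horizon

# Event times are partial sums of independent exponential inter-arrival
# times with mean 1/a; X(t) counts the events that occurred up to time t.
inter_arrivals = rng.exponential(1.0 / a, size=int(a * t_max * 2))
event_times = np.cumsum(inter_arrivals)
X_tmax = np.searchsorted(event_times, t_max)   # value of X(t_max)

# The mean number of events on [0, t] is a*t, so X(t_max)/t_max estimates a.
print(abs(X_tmax / t_max - a) < 0.3)
```

The empirical event rate over a long horizon recovers the parameter a, consistent with the expected count a·t on [0, t].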
