The term **covariance** does not appear in the dictionary produced by the **Royal Spanish Academy** (**RAE**). The concept, however, is used in the fields of **statistics** and **probability** to name the **value that reflects the degree of joint variation between two random variables**, taking their **means** as reference.

Covariance, therefore, allows us to discover whether two **variables** maintain a **dependency link**. The value also helps in estimating other parameters.

A **random variable** is a function that assigns a **value**, usually numeric, to the outcome of a *random experiment*. A *random experiment*, on the other hand, is one that can yield different results even when carried out more than once under the same conditions, so that each outcome is impossible to predict and, therefore, to reproduce.

A very common example of a **random experiment**, one we can try in our daily lives, is the throw of a die: even if it is thrown onto the same surface, with the same hand or cup, and with roughly the same force and direction, it is not possible to predict which of its faces will end up pointing upward.

If the low values of one variable correspond to the low values of the other, or likewise with the high values of both, the covariance has a **positive value** and is described as **direct**. If, on the other hand, the low values of one variable correspond to the high values of the other and vice versa, the covariance is **negative** and is described as **inverse**. The **trend** of the linear relationship between the variables is thus expressed through the **sign of the covariance**.

There are different **formulas** to calculate the covariance. It can be defined as the **arithmetic mean** of the products of the deviations of the variables from their respective means: Cov(X, Y) = Σ(xᵢ − x̄)(yᵢ − ȳ) / n, which is equivalent to the mean of the products minus the product of the means: Cov(X, Y) = (Σxᵢyᵢ / n) − x̄ȳ.

Suppose the variables are the grades obtained by five students in **History** and **Geography**:

*History grades (P) of the five students: 6, 5, 7, 7, 4 (total = 29)*

*Geography grades (S) of the five students: 7, 3, 4, 3, 5 (total = 22)*

Next, tabulate the data, multiplying each student's two grades:

*P x S: 42 (since 6 x 7 = 42), 15 (5 x 3), 28 (7 x 4), 21 (7 x 3), 20 (4 x 5). Total sum of the products = 126*

The mean of P: 29/5 = 5.8

The mean of S: 22/5 = 4.4

Finally:

*PS covariance: (126/5) – 5.8 x 4.4*

*PS covariance: 25.2 – 5.8 x 4.4*

*PS covariance: 25.2 – 25.52*

*PS covariance: -0.32*
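The worked example above can be reproduced in a few lines of Python. This is a minimal sketch; the names `P` and `S` follow the article's notation, and the formula used is the "mean of products minus product of means" form:

```python
# Grades from the article's example.
P = [6, 5, 7, 7, 4]  # History grades
S = [7, 3, 4, 3, 5]  # Geography grades

n = len(P)
mean_P = sum(P) / n  # 29 / 5 = 5.8
mean_S = sum(S) / n  # 22 / 5 = 4.4

# Covariance as the mean of the products minus the product of the means.
cov_PS = sum(p * s for p, s in zip(P, S)) / n - mean_P * mean_S

print(round(cov_PS, 2))  # -0.32
```

The negative result confirms the inverse relationship described earlier: in this sample, higher History grades tend to go with lower Geography grades.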

In addition to revealing whether two given random variables have a link of mutual **dependence**, the covariance is used to estimate parameters such as the *regression line* and the *linear correlation coefficient*.

The **regression line**, also known as *linear fit* or *linear regression*, is a concept from the field of **statistics**: a mathematical model used to approximate the dependency between a group of variables and a random term.

The **linear correlation coefficient**, on the other hand, is an indicator of the direction and strength of the *linear relationship* (in mathematics, one in which the value of one magnitude depends on the value of another) and *proportionality* (a constant ratio between measurable quantities) between two *statistical variables* (characteristics that can fluctuate, whose values can be observed and measured).

It is important to distinguish the following two types of covariance: the covariance of two random variables, which is considered a property of their joint distribution, that is, of the events of both occurring simultaneously; and the sample covariance, which is used as a statistical estimate of that **parameter**.
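This distinction can be made concrete with the article's grade data. In the common convention, the population form divides the sum of deviation products by n, while the sample form (an estimate of the parameter from observed data) divides by n − 1; only the denominator differs:

```python
P = [6, 5, 7, 7, 4]  # History grades
S = [7, 3, 4, 3, 5]  # Geography grades
n = len(P)

mean_p = sum(P) / n
mean_s = sum(S) / n

# Products of the deviations from the means, one per student.
dev_products = [(p - mean_p) * (s - mean_s) for p, s in zip(P, S)]

# Population covariance: a property of the joint distribution, denominator n.
cov_population = sum(dev_products) / n        # -0.32

# Sample covariance: an unbiased estimate of that parameter, denominator n - 1.
cov_sample = sum(dev_products) / (n - 1)      # -0.4

print(round(cov_population, 2), round(cov_sample, 2))
```

With only five observations the two values differ noticeably; as the sample grows, the two forms converge.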