Frequency probability, also known as frequentist probability, refers to how likely an event is when an experiment is repeated many times. It can be understood as the quotient between the number of times the event occurs and the number of trials, as the number of trials tends to infinity.
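Using the usual notation, this limiting quotient is commonly written as follows (a sketch of the standard definition, where \(n_A\) is the number of trials in which the event \(A\) occurs out of \(N\) repetitions):

```latex
P(A) = \lim_{N \to \infty} \frac{n_A}{N}
```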

The idea of frequency probability is used when working with a very high number of repetitions, which makes it possible to observe the long-term trend. It is important to note that the assignment of values is always tied to the analysis of multiple iterations; this is why computer simulations are commonly used.

The usefulness of frequency probability is often debated by specialists. Some experts consider that the method is not truly empirical and that the randomness criteria it relies on are not reliable.

To calculate a frequency probability, the experiment must be programmed in a system that provides random iterations. The frequency probability of the phenomenon in question is then studied through a table of values.

It is considered that, after a large number of repetitions, the values produced by the experiment approach the theoretical values. The frequency probability data are then used to draw conclusions.
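As a minimal sketch of this procedure (assuming, for illustration, that the experiment is rolling a fair six-sided die), the following Python simulation builds a table of values for increasing numbers of repetitions and compares the observed relative frequency with the theoretical value of 1/6:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

THEORETICAL = 1 / 6  # theoretical probability of rolling a given face


def relative_frequency(trials: int, face: int = 6) -> float:
    """Roll a fair die `trials` times; return the relative frequency of `face`."""
    hits = sum(1 for _ in range(trials) if random.randint(1, 6) == face)
    return hits / trials


# Table of values: relative frequency for a growing number of repetitions.
for n in (100, 1_000, 10_000, 100_000):
    freq = relative_frequency(n)
    print(f"N = {n:>7}: relative frequency = {freq:.4f} (theoretical {THEORETICAL:.4f})")
```

As the number of repetitions grows, the printed frequencies cluster ever more tightly around 1/6, which is the long-term trend the method relies on.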

In short, frequency probability can be related to relative frequency, which is the quotient between the absolute frequency (the number of times a value appears) and the sample size. It is argued that, as the random experiment is repeated many times, the relative frequency approaches the probability of the event.
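The quotient just described can be computed directly. As an illustrative sketch (the sample data below is invented), counting how often each value appears in a sample and dividing by the sample size yields its relative frequency:

```python
from collections import Counter

# Hypothetical sample of observed outcomes (illustrative data only).
sample = [2, 5, 2, 3, 2, 6, 1, 2, 4, 2]

counts = Counter(sample)  # absolute frequency of each value
n = len(sample)           # sample size

# Relative frequency = absolute frequency / sample size.
rel_freq = {value: count / n for value, count in counts.items()}

print(rel_freq[2])  # value 2 appears 5 times out of 10 -> 0.5
```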

Compared with classical probability, we can point out some differences that are often used to criticize the frequentist approach:

* classical: it is used when the outcomes are probable, that is, when a prior study indicates that, beyond the mere possibility of their taking place, there are indications that support their occurrence;
* frequentist: it is measured on the basis of an estimate projected toward the future or drawn from experience, but without proof that the event can really happen;

* classical: the number of favorable outcomes directly influences the study;
* frequentist: no behavior is considered definitive throughout the study; rather, behaviors are interpreted by forcing the perspective that they lead to a particular result.

The concept of frequency probability dates from the mid-nineteenth century, although its formal development took place during the first half of the twentieth century at the hands of the Austrian-born mathematician Richard von Mises, who put forward two premises to support the theory:

* statistical regularity: although the concrete results behave somewhat chaotically, after subjecting an experiment to a large number of iterations, patterns begin to emerge in the results;

* probability should be considered objective: von Mises pointed out that probability is a concept that can be measured, and he supported this assertion on the grounds that random phenomena have certain characteristics that make them unique.

Among the specific criticisms that frequency probability has received as an empirical method of calculating probabilities, we can point out the following two:

* the concept of limit cannot be considered real: the proposed formula assumes that the probability of an event should stabilize as the experiment is repeated indefinitely. This occurs as N tends to infinity, although it goes without saying that infinite repetition is not possible in practice;

* the sequence cannot be truly random: the required stabilization of the probability mentioned above undermines the real randomness of the sequence, since it makes the sequence determined. In addition, the random numbers available in a computerized experiment are pseudorandom, not truly spontaneous as they would be in nature.