Monday 18 March 2013

Laws of large numbers without additivity

P. Terán (2014). Transactions of the American Mathematical Society 366, 5431-5451.


This paper presents a law of large numbers in which the probability measure is replaced by a set function satisfying weaker properties. Instead of the inclusion-exclusion formula relating the probabilities of unions and intersections, only complete monotonicity (a one-sided variant of that formula) is assumed. Further, continuity along increasing sequences is assumed to hold for open sets but not for general Borel sets.
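For the record, complete monotonicity in its standard formulation asks that, for every n and all events A_1, ..., A_n,

\[
\nu\Big(\bigcup_{i=1}^{n} A_i\Big) \;\ge\; \sum_{\emptyset \neq I \subseteq \{1,\dots,n\}} (-1)^{|I|+1}\, \nu\Big(\bigcap_{i \in I} A_i\Big),
\]

an inequality which a probability measure satisfies with equality (the inclusion-exclusion formula); only the ≥ half is kept.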

Many results of this kind have been published in recent years, under very heterogeneous assumptions. This seems to be the first result where no extra assumption is placed on the random variables (beyond integrability, of course).

The paper also presents a number of examples showing that the behaviour of random variables in non-additive probability spaces can be quite different from the additive case. For example, the sample averages of a sequence of i.i.d. variables with values in [0,2] can simultaneously
- converge almost surely to 0,
- have `probability' 0 of being smaller than 1, and
- converge in law to a non-additive distribution supported by the whole interval [0,1].
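If the first two behaviours seem incompatible, recall that a capacity need not tie the value of an event to that of its complement. As a toy illustration of the mechanism (mine, not the paper's construction): the completely monotone capacity ν(A) = P(S ⊆ A) induced by the constant random set S = {0, 2} satisfies

\[
\nu([0,1)) = \nu((1,2]) = 0, \qquad \nu([0,2]) = 1,
\]

so `smaller than 1' and `larger than 1' both receive capacity 0.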


Up the line:
This paper starts a new research line.

Down the line:
·Non-additive probabilities and the laws of large numbers (plenary lecture 2011).



To download, click on the title or here.

Non-additive probabilities and the laws of large numbers (in Spanish)

P. Terán (2011).


These are the slides (in Spanish) of my plenary lecture at the Young Researchers Congress celebrating the centennial of Spain's Royal Mathematical Society.

You can read a one-page abstract here at the conference website.


Up the line:
·Laws of large numbers without additivity (2014). The slides essentially cover this paper, with context for an audience of non-probabilists.

Down the line:
Some papers have been submitted.


To download, click on the title or here.

Centrality as a gradual notion: A new bridge between fuzzy sets and statistics

P. Terán (2011). International Journal of Approximate Reasoning 52, 1243-1256.


According to one point of view, fuzzy set-theoretic notions are problematic unless they can be justified as / explained from / reduced to ordinary statistics and probability. I can't say that this makes much sense to me.

In this paper the opposite route is taken, which is fun: that view is subverted by writing a paper of exactly the same kind in which, instead, statistical/probabilistic notions are reduced to fuzzy ones. The point being: so what?

A fuzzy set of central points of a probability distribution with respect to a family of fuzzy reference events is defined. Its fuzzy set-theoretic interpretation is very natural: the membership degree of x equals the truth value of the proposition "Every reference event containing x is probable".
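In symbols, and only as a sketch of the simplest crisp case (my reading, assuming the reference events form an ordinary family 𝒜, that "A is probable" has truth value P(A), and that "every" is interpreted as an infimum), the membership degree would be

\[
\mu(x) \;=\; \inf \{\, P(A) : A \in \mathcal{A},\ x \in A \,\};
\]

the paper's actual definition is more general, since genuinely fuzzy reference events require many-valued semantics for both membership and quantification.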

Natural location estimators then arise as the points whose membership in that fuzzy set is maximal. The paper presents many examples of known notions from statistics and probability arising as maximally central estimators (MCEs) of a distribution or, more generally, of a family of distributions. The prototype of an MCE is the mode (taking the singletons as reference events), and MCEs can thus be seen as generalized modes.
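Continuing the crisp sketch above (still an assumption on my part, and for a discrete distribution so that P({x}) is a genuine mass): with the singletons as reference events, the only reference event containing x is {x} itself, so

\[
\mu(x) = P(\{x\}),
\]

whose maximizers are exactly the modes, recovering the prototype case.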

From the paper's abstract: "This framework has a natural interpretation in terms of fuzzy logic and unifies many known notions from statistics, including the mean, median and mode, interquantile intervals, the Lorenz curve, the halfspace median, the zonoid and lift zonoid, the coverage function and several expectations and medians of random sets, and the Choquet integral against an infinitely alternating or infinitely monotone capacity."


Up the line:
This paper starts a new research line.

Down the line:
·Connections between statistical depth functions and fuzzy sets (2010).
A long paper on statistical consistency has been submitted.


To download, click on the title or here.