Saturday, 8 November 2014
P. Terán (2015). Statistics and Probability Letters 96, 185-189.
This note shows that the main results in an earlier SPL paper by Chareka are wrong. I approached Chareka with a counterexample to his central limit theorem and he didn't take it very well. He said the example was 'pathological' and spoke to me quite dismissively, saying that he and I were 'not even on the same page'. He sent me an eminently reasonable (negative) referee report from SPL as an example of the kind of error I was falling into. He was somehow able to overturn the report (I can't see how) and have his paper published. He said that I should accept his results since they had been checked by some of the world's leading experts in measure theory (true story!!!), at which point I felt so deeply offended by the argument from authority that I just cut off all communication.
Some time later I found a counterexample to the other main result in his paper, and said: Well, the world needs to know.
The nice thing about this short note is that it shows how ordinary intuitions about probabilities become misleading in the more general framework of capacities.
Up the line:
·Laws of large numbers without additivity (2014).
·Non-additive probabilities and the laws of large numbers (plenary lecture 2011).
Down the line:
Some related papers have been submitted or are in preparation.
To download, click on the title or here.
Thursday, 24 October 2013
A law of large numbers for the possibilistic mean value
P. Terán (2014). Fuzzy Sets and Systems 245, 116-124.
This paper has an interesting idea, I think.
In general, random variables in a possibility (instead of probability) space do not satisfy a law of large numbers exactly like the one in probability theory. The reason is that, if the variable takes on at least two different values x, y with full possibility, it is fully possible that the sample average equals x for every sample size and also that it equals y. Thus we cannot ensure that the sample average converges to either x or y necessarily.
The most you can say, following Fullér and subsequent researchers, is that the sample averages will necessarily tend to be confined within a set of possible limits.
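To make the obstruction concrete, here is a minimal sketch of mine (not from the paper), assuming min-based joint possibility for a sample; the paper itself works with more general t-normed arithmetics. Both constant samples are fully possible, so the averages cannot be forced towards a single limit:

    # Possibility distribution of one observation: two values, both fully possible.
    poss = {0.0: 1.0, 1.0: 1.0}

    def sample_possibility(sample):
        # Joint possibility of an exact sample under min-based independence
        # (an assumption of this sketch).
        return min(poss[x] for x in sample)

    n = 10
    all_zeros = [0.0] * n   # sample average stays at 0 for every sample size
    all_ones = [1.0] * n    # sample average stays at 1 for every sample size

    print(sample_possibility(all_zeros))  # 1.0
    print(sample_possibility(all_ones))   # 1.0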
The interesting idea is the following:
1) Define a suitable notion of convergence in distribution.
2) Show that the law of large numbers does hold in the sense that the sample averages converge in distribution to a random variable, even though they cannot in general converge either in necessity or almost surely.
3) Show that, magically, the statement in distribution is stronger than the previous results in the literature, not weaker as one would expect!
---
On a more anecdotal note, this paper was written as a self-challenge. Physicist Edward Witten is said to have typed some of his papers by improvisation, making up the details as he went along. I had always considered myself incapable of doing something like that, but I've been happy to learn that I was wrong.
Up the line:
·Strong law of large numbers for t-normed arithmetics (2008)
·On convergence in necessity and its laws of large numbers (2008)
Down the line:
There are a couple of papers under submission, and a whole lot of ideas.
To download the paper, click on the title or here.
Sunday, 14 April 2013
Distributions of random closed sets via containment functionals
P. Terán (2014). Journal of Nonlinear and Convex Analysis 15, 907-917.
A central problem in the theory of random sets is how to characterize the distribution of a random set in a simpler way. The fact that we are dealing with a random element of a space each point of which is a set implies that the distribution is defined on a sigma-algebra which is a set of sets of sets.
The standard road, initiated in the seventies by e.g. Kendall and Matheron (but already travelled in the opposite direction in the fifties by Choquet), is to describe a set by a number of 0-1 characteristics, typically whether or not it hits (i.e. intersects) each element of a family of test sets. This gives us the hitting functional, defined on the test sets (a set of sets, one order of magnitude simpler) as the hitting probability of the random set.
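For intuition, here is a toy illustration of mine (with a made-up model of random intervals, not anything from the paper): the hitting functional T(K) = P(X ∩ K ≠ ∅) and the containment functional C(K) = P(X ⊆ K) of a random closed interval on the line, estimated by simulation.

    import random

    def sample_interval():
        # Hypothetical model: left endpoint and width both uniform on [0, 1].
        left = random.random()
        return (left, left + random.random())

    def hits(iv, K):
        # Closed intervals [l, r] and [a, b] intersect iff l <= b and a <= r.
        (l, r), (a, b) = iv, K
        return l <= b and a <= r

    def contained(iv, K):
        # [l, r] is a subset of [a, b] iff a <= l and r <= b.
        (l, r), (a, b) = iv, K
        return a <= l and r <= b

    K = (0.25, 0.75)
    n = 100_000
    draws = [sample_interval() for _ in range(n)]
    print(sum(hits(iv, K) for iv in draws) / n)       # estimates T(K)
    print(sum(contained(iv, K) for iv in draws) / n)  # estimates C(K)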
The classical assumptions on the underlying space are: locally compact, second countable and Hausdorff (this implies the existence of a separable complete metric). That is enough for applications in R^d but seems insufficiently general to live merrily ever after. In contrast, the theory of probability measures in metric spaces was well developed without local compactness about half a century ago.
Molchanov's book includes three proofs of the Choquet-Kendall-Matheron theorem, and it is fascinating how all three break down in totally different ways if local compactness is dropped.
This paper is an attempt at finding a new path of proof that avoids local compactness. I failed but ended up succeeding in replacing second countability by sigma-compactness, which (under local compactness) is strictly weaker. Sadly, I didn't know how to handle some problems and had to opt for sigma-compactness after believing for some time that I had a correct proof in locally compact Hausdorff spaces.
Regarding the assumption I initially set out to defeat, all I can say for the moment is that now there are four proofs that break down in non-locally-compact spaces.
Up the line:
This starts a new line.
Down the line:
Some work awaits its moment to be typed.
To download, click on the title or here.
Monday, 18 March 2013
Laws of large numbers without additivity
P. Terán (2014). Transactions of the American Mathematical Society 366, 5431-5451.
In this paper, a law of large numbers is presented in which the probability measure is replaced by a set function satisfying weaker properties. Instead of the union-intersection (inclusion-exclusion) formula of probabilities, only complete monotonicity (a one-sided variant) is assumed. Further, the continuity property for increasing sequences is assumed to hold for open sets but not for general Borel sets.
Many results of this kind have been published in recent years, with very heterogeneous assumptions. This seems to be the first result where no extra assumption is placed on the random variables (beyond integrability, of course).
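As a concrete (and hedged) illustration of the kind of set function involved: belief functions ν(A) = Σ_{B ⊆ A} m(B), built from a mass assignment m on the subsets of a finite space, are the textbook examples of completely monotone capacities. The sketch below is mine, not taken from the paper; it spot-checks the 2-monotone inequality ν(A ∪ B) + ν(A ∩ B) ≥ ν(A) + ν(B), the simplest of the one-sided conditions that replace additivity.

    from itertools import combinations
    import random

    space = frozenset({0, 1, 2, 3})
    subsets = [frozenset(c) for r in range(len(space) + 1)
               for c in combinations(space, r)]

    # Random mass assignment: nonnegative weights on nonempty subsets, summing to 1.
    weights = [random.random() for _ in subsets[1:]]
    total = sum(weights)
    mass = dict(zip(subsets[1:], (w / total for w in weights)))

    def nu(A):
        # Belief of A: total mass of the subsets contained in A.
        return sum(m for B, m in mass.items() if B <= A)

    for _ in range(1000):
        A, B = random.sample(subsets, 2)
        assert nu(A | B) + nu(A & B) >= nu(A) + nu(B) - 1e-12

    print("2-monotonicity held for all sampled pairs")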
The paper also presents a number of examples showing that the behaviour of random variables in non-additive probability spaces can be quite different from the additive case. For example, the sample averages of a sequence of i.i.d. variables with values in [0,2] can
-converge almost surely to 0
-have `probability' 0 of being smaller than 1
-converge in law to a non-additive distribution supported by the whole interval [0,1].
Up the line:
This starts a new line.
Down the line:
·Non-additive probabilities and the laws of large numbers (plenary lecture 2011).
To download, click on the title or here.
Non-additive probabilities and the laws of large numbers (in Spanish)
P. Terán (2011).
These are the slides of my plenary lecture at the Young Researchers Congress celebrating the centennial of Spain's Royal Mathematical Society (in Spanish).
You can read a one-page abstract here at the conference website.
Up the line:
·Laws of large numbers without additivity (2014). The slides essentially cover this paper, with context for an audience of non-probabilists.
Down the line:
Some papers have been submitted.
To download, click on the title or here.
Centrality as a gradual notion: A new bridge between fuzzy sets and statistics
P. Terán (2011). International Journal of Approximate Reasoning 52, 1243-1256.
According to one point of view, fuzzy set theoretical notions are problematic unless they can be justified as / explained from / reduced to ordinary statistics and probability. I can't say that this makes much sense to me.
This paper takes the opposite route, which is fun. It subverts that view by showing how statistical/probabilistic notions can instead be reduced to fuzzy ones. The point is: So what?
A fuzzy set of central points of a probability distribution with respect to a family of fuzzy reference events is defined. Its fuzzy set theoretical interpretation is very natural: the membership degree of x equals the truth value of the proposition "Every reference event containing x is probable".
The points whose membership in that fuzzy set is maximal are then natural location estimators. The paper presents many examples of known notions from statistics and probability arising as maximally central estimators (of a distribution or, more generally, of a family of distributions). The prototype of a maximally central estimator is the mode (taking the singletons as reference events), and MCEs can thus be seen as generalized modes.
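Here is a minimal sketch of the idea, under my own simplification to crisp reference events on a finite space (the paper works with fuzzy events and genuine truth values): the membership of x is the smallest probability of a reference event containing x, and an MCE maximizes it.

    P = {1: 0.1, 2: 0.4, 3: 0.3, 4: 0.2}   # a toy discrete distribution

    def prob(A):
        return sum(P[y] for y in A)

    def membership(x, reference_events):
        # Degree to which "every reference event containing x is probable" holds,
        # read here crisply as the minimum probability over such events.
        return min(prob(A) for A in reference_events if x in A)

    points = list(P)
    singletons = [{x} for x in points]
    mce = max(points, key=lambda x: membership(x, singletons))
    print(mce)  # 2 -- with singleton reference events, the MCE is the mode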
From the paper's abstract: "This framework has a natural interpretation in terms of fuzzy logic and unifies many known notions from statistics, including the mean, median and mode, interquantile intervals, the Lorenz curve, the halfspace median, the zonoid and lift zonoid, the coverage function and several expectations and medians of random sets, and the Choquet integral against an infinitely alternating or infinitely monotone capacity."
Up the line:
This starts a new line.
Down the line:
·Connections between statistical depth functions and fuzzy sets (2010).
A long paper on statistical consistency has been submitted.
To download, click on the title or here.