Saturday 8 November 2014
P. Terán (2015). Statistics and Probability Letters 96, 185-189.
This note shows that the main results in a former SPL paper by Chareka are wrong. I approached Chareka with a counterexample to his central limit theorem and he didn't take it very well. He said the example was `pathological' and told me, quite dismissively, that he and I were `not even on the same page'. He sent me an eminently reasonable (negative) referee report from SPL as an example of the kind of error I was falling into. He was somehow able to overturn that report, I can't see how, and have his paper published. He said that I should accept his results since they had been checked by some of the world's leading experts in measure theory (true story!!!), at which point I felt so deeply offended by the argument from authority that I just cut off all communication.
Some time later I found a counterexample to the other main result in his paper, and said: Well, the world needs to know.
The nice thing about this short note is that it shows that ordinary intuitions about probabilities are misleading when studying the more general framework of capacities.
Up the line:
·Laws of large numbers without additivity (2014).
·Non-additive probabilities and the laws of large numbers (plenary lecture 2011).
Down the line:
Some related papers have been submitted or are under preparation.
To download, click on the title or here.
Monday 24 February 2014
Strong consistency and rates of convergence for a random estimator of a fuzzy set
P. Terán, M. López-Díaz (2014). Computational Statistics and Data Analysis 77, 130-145.
This paper is actually a continuation of work in my doctoral dissertation, which is why it is joint work with my Ph.D. advisor. The paper was in preparation when I moved to Zaragoza and, with the distance, it was never completed. Fast forward some years, we went back to working on it, appending to the theoretical results some interesting simulations and, eventually, an example with real data.
It is about approximating an unknown fuzzy set from the information in random samples taken from (again randomly sampled) alpha-cuts of the fuzzy set. Through the connection between fuzzy sets and nested random sets, that can also be recast as a problem of estimating a set conditionally on the value of another variable, when the set depends monotonically on the value of that variable.
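(For readers less familiar with the terminology, here is the standard alpha-cut machinery the previous paragraph relies on, in notation of my own rather than the paper's: the alpha-cuts of a fuzzy set $U$ are $U_\alpha = \{x : U(x) \ge \alpha\}$, and $U$ is recovered from them via $U(x) = \sup\{\alpha \in (0,1] : x \in U_\alpha\}$. So a fuzzy set is equivalent to the nested family of its alpha-cuts, and sampling alpha-cuts amounts to sampling sets from that nested family.)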
We give rates of convergence for the approximants as a function of both sample sizes, in several metrics between fuzzy sets. Simulations suggest that sample sizes of 20-30 may be enough for the rate to be reliable. We present an example with breast cancer data, studying the range of the variable `cell size' as a function of `shape compactness', a measure of cell irregularity.
Up the line:
·P. Terán, M. López-Díaz (2004). A random approximation of set valued càdlàg functions. J. Math. Anal. Appl. 298, 352-362. (You can download it for free at the journal's site.)
Down the line:
Nothing so far. One referee wanted us to study more metrics, another different approximation schemes.
To download the paper, click on the title or here
Thursday 24 October 2013
A law of large numbers for the possibilistic mean value
P. Terán (2014). Fuzzy Sets and Systems 245, 116-124.
This paper has an interesting idea, I think.
In general, random variables in a possibility (instead of probability) space do not satisfy a law of large numbers exactly like the one in probability theory. The reason is that, if the variable takes on at least two different values x, y with full possibility, it is fully possible that the sample average equals x for every sample size and also that it equals y. Thus we cannot ensure that the sample average converges to either x or y necessarily.
The most you can say, with Fullér and subsequent researchers, is that the sample average will necessarily tend to be confined within a set of possible limits.
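A minimal sketch of why almost sure convergence fails, assuming the variables are non-interactive so that joint possibility is the minimum of the marginal possibilities (my simplifying assumption, not necessarily the paper's setting): if $\mathrm{Pos}(X_i = x) = 1$ for every $i$, then
$\mathrm{Pos}(\bar X_n = x \ \text{for all } n) \ \ge\ \mathrm{Pos}(X_1 = x, X_2 = x, \dots) \ =\ \inf_i \mathrm{Pos}(X_i = x) \ =\ 1,$
and the same computation with $y$ in place of $x$ gives full possibility to a different limit, so the sample averages cannot be forced to a single constant.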
The interesting idea is the following:
1) Define a suitable notion of convergence in distribution.
2) Show that the law of large numbers does hold in the sense that the sample averages converge to a random variable in distribution, even if it cannot converge in general either in necessity or almost surely.
3) Show that, magically, the statement in distribution is stronger than the previous results in the literature, not weaker as one would expect!
---
On a more anecdotal note, this paper was written as a self-challenge. The physicist Edward Witten is said to have typed (at least some of) his papers by improvisation, making the details up as he went along. I had always considered myself incapable of doing something like that, but I've been happy to learn that I was wrong.
Up the line:
·Strong law of large numbers for t-normed arithmetics (2008)
·On convergence in necessity and its laws of large numbers (2008)
Down the line:
There are a couple of papers under submission, and a whole lot of ideas.
To download the paper, click on the title or here
Expectations of random sets in Banach spaces
P. Terán (2014). Journal of Convex Analysis 21(4), to appear.
A random set is a random element of a space of sets. Since the latter are not linear (in general, you cannot subtract a set from another), it is not possible to define a notion of expectation for random sets with all the properties of the usual expectation of random variables or vectors. Thus there exist many definitions of the expectation, with various properties.
Two very interesting definitions were proposed by Aumann and Herer. Aumann's expectation is defined by putting together the expectations of all selections of the random set (i.e. if a random variable/vector is taken by selecting one point of the random set, its expectation should be an element of the expectation of the random set). Thus it is analytical in that it relies on calculating integrals in the underlying linear space.
Herer's expectation is not analytical but geometrical, as it only uses the metric structure of the space. It is defined as the locus of all points which are closer to each point x than x is, on average, to the farthest point of the random set. In other words, if R(x) denotes the radius of the smallest ball centered at x that covers the random set, then the Herer expectation is the intersection of all balls centered at any point x with radius the expected value of R(x).
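In symbols, following the descriptions above (notation mine): for a random closed set $X$,
$E_A[X] = \mathrm{cl}\,\{\,E[\xi] : \xi \ \text{an integrable selection of } X\,\}, \qquad E_H[X] = \bigcap_{x} B\big(x,\ E[\,\sup_{z \in X} d(x,z)\,]\big),$
where $B(x,r)$ is the closed ball with center $x$ and radius $r$, and the intersection runs over all points $x$ of the space.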
The aim of the paper is to study the relationship between those two notions. Since the Herer expectation is an intersection of balls, the simplest possibilities are that it coincides with the Aumann expectation or with the intersection of all balls containing the Aumann expectation (its ball hull). The first case I had already studied, though there was a gap in the proof of the non-compact case which is corrected here.
The main types of results are:
1. Sufficient conditions on the norm for the equality between the Herer expectation and the ball hull of the Aumann expectation.
2. Sufficient conditions on the kind of sets the random set takes on as values.
3. Inclusions valid without restricting either the norm or the possible values of the random set.
The paper is a very nice amalgam of random sets, bornological differentials, and Banach space geometry. I think it makes a convincing case for how all those ingredients fit together.
Up the line:
·On the equivalence of Aumann and Herer expectations of random sets (2008).
·Intersections of balls and the ball hull mapping (2010).
Down the line:
A paper on limit theorems is in the making.
To download the paper, click on the title or here.
Monday 23 September 2013
Jensen's inequality for random elements in metric spaces and some applications
P. Terán (2014). Journal of Mathematical Analysis and Applications 414, 756-766.
(By the way, this is my thirtieth paper.)
This paper has a funny story. I was invited to give a talk a couple of years ago. From the background of the people issuing the invitation, it looked clear that they had found my second paper with Ilya Molchanov. The situation was awkward, because it simply makes no sense to fly somebody from abroad to give you a two-hour lecture on a topic he has only written one paper about. So I expected that they would soon realize I was not fit for what they wanted and the invitation would be withdrawn (and that's what happened).
But in the meantime I grew increasingly concerned. What if they did know what they were doing? What if they just had an insane amount of money to spend? If I waited and the invitation never got withdrawn, I'd have to show up and speak for two hours, and what was I going to say?
The topic of the original paper was the Law of Large Numbers for random elements of metric spaces. Under some axiomatic conditions on the way averages are constructed (maybe not via algebraic operations), we constructed an expectation operator and proved the LLN for it. It seemed to me that the first thing those people were going to ask me was: What are the properties of that expectation? Does it enjoy some of the nice properties of the expectation defined in less general spaces using Lebesgue or Bochner integrals? Unfortunately the paper, being a paper, had paid no attention to any properties that were unnecessary for achieving its aim (proving the Law of Large Numbers).
I thought: I will prove Jensen's inequality for that expectation. That way they will realize that it is well-behaved and plausibly has more nice properties, even if I can't claim that it has.
Once it became clear that the talk would not happen, I worked for some time on applications and called it a paper. It's fun because the paper's path is quite unusual: we prove Jensen's inequality from the Law of Large Numbers; then we prove a Dominated Convergence Theorem from Jensen's inequality; and then we prove a Monotone Convergence Theorem from the Dominated Convergence Theorem.
Abstract: Jensen's inequality is extended to metric spaces endowed with a convex combination operation. Applications include a dominated convergence theorem for both random elements and random sets, a monotone convergence theorem for random sets, and other results on set-valued expectations in metric spaces and on random probability measures. Some of the applications are valid for random sets as well as random elements, extending results known for Banach spaces to more general metric spaces.
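For orientation, the classical statement being extended is: for a convex function $f$ and an integrable random element $\xi$,
$f(E[\xi]) \ \le\ E[f(\xi)].$
In the metric-space setting, the expectation is the one built from the convex combination operation and convexity of $f$ is understood with respect to that same operation; this is a sketch of the setting, not the paper's exact hypotheses.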
Up the line:
·A law of large numbers in a metric space with a convex combination operation (2006, w. Ilya Molchanov). Downloadable from Ilya's website.
Down the line:
Nothing being prepared.
To download, click on the title or here.
Sunday 14 April 2013
Distributions of random closed sets via containment functionals
P. Terán (2014). Journal of Nonlinear and Convex Analysis 15, 907-917.
A central problem in the theory of random sets is how to characterize the distribution of a random set in a simpler way. The fact that we are dealing with a random element of a space each point of which is a set implies that the distribution is defined on a sigma-algebra which is a set of sets of sets.
The standard road, initiated in the seventies by e.g. Kendall and Matheron (but already travelled in the opposite direction in the fifties by Choquet), is to describe a set by a number of 0-1 characteristics, typically whether or not it hits (i.e. intersects) each element of a family of test sets. This gives us the hitting functional, defined on the test sets (a set of sets, one order of magnitude simpler) as the hitting probability of the random set.
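In symbols (standard random set notation, stated as a sketch since the paper's title refers to the dual, containment functional): for test sets $K$,
$T_X(K) = P(X \cap K \neq \emptyset) \quad \text{(hitting functional)}, \qquad C_X(K) = P(X \subseteq K) \quad \text{(containment functional)}.$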
The classical assumptions on the underlying space are: locally compact, second countable and Hausdorff (this implies the existence of a separable complete metric). That is enough for applications in R^d but seems insufficiently general to live merrily ever after. In contrast, the theory of probability measures in metric spaces was well developed without local compactness about half a century ago.
Molchanov's book includes three proofs of the Choquet-Kendall-Matheron theorem, and it is fascinating how all three break down in totally different ways if local compactness is dropped.
This paper is an attempt at finding a new path of proof that avoids local compactness. I failed but ended up succeeding in replacing second countability by sigma-compactness, which (under local compactness) is strictly weaker. Sadly, I didn't know how to handle some problems and had to opt for sigma-compactness after believing for some time that I had a correct proof in locally compact Hausdorff spaces.
Regarding the assumption I initially set out to defeat, all I can say for the moment is that now there are four proofs that break down in non-locally-compact spaces.
Up the line:
This starts a new line.
Down the line:
Some work awaits its moment to be typed.
To download, click on the title or here.
Monday 18 March 2013
Laws of large numbers without additivity
P. Terán (2014). Transactions of the American Mathematical Society 366, 5431-5451.
In this paper, a law of large numbers is presented in which the probability measure is replaced by a set function satisfying weaker properties. Instead of the union-intersection formula satisfied by probabilities, only complete monotonicity (a one-sided variant) is assumed. Further, the continuity property for increasing sequences is assumed to hold for open sets but not for general Borel sets.
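To make the contrast concrete (standard definitions, notation mine): additivity of a probability $P$ can be written as the union-intersection identity
$P(A \cup B) + P(A \cap B) = P(A) + P(B),$
whereas a completely monotone set function $\nu$ is only required to satisfy, for every $n$ and all events $A_1, \dots, A_n$, the one-sided inequality
$\nu\big(\bigcup_{i=1}^n A_i\big) \ \ge\ \sum_{\emptyset \neq I \subseteq \{1,\dots,n\}} (-1)^{|I|+1}\, \nu\big(\bigcap_{i \in I} A_i\big),$
which probabilities satisfy with equality (inclusion-exclusion).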
Many results of this kind have been published in recent years, with very heterogeneous assumptions. This seems to be the first result in which no extra assumption is placed on the random variables (beyond integrability, of course).
The paper also presents a number of examples showing that the behaviour of random variables in non-additive probability spaces can be quite different. For example, the sample averages of a sequence of i.i.d. variables with values in [0,2] can
-converge almost surely to 0
-have `probability' 0 of being smaller than 1
-converge in law to a non-additive distribution supported by the whole interval [0,1].
Up the line:
This starts a new line.
Down the line:
·Non-additive probabilities and the laws of large numbers (plenary lecture 2011).
To download, click on the title or here.
Non-additive probabilities and the laws of large numbers (in Spanish)
P. Terán (2011).
These are the slides of my plenary lecture at the Young Researchers Congress celebrating the centennial of Spain's Royal Mathematical Society (in Spanish).
You can read a one-page abstract here at the conference website.
Up the line:
·Laws of large numbers without additivity (201x). The slides essentially cover this paper, with context for an audience of non-probabilists.
Down the line:
Some papers have been submitted.
To download, click on the title or here.
Centrality as a gradual notion: A new bridge between fuzzy sets and statistics
P. Terán (2011). International Journal of Approximate Reasoning 52, 1243-1256.
According to one point of view, fuzzy set theoretical notions are problematic unless they can be justified as / explained from / reduced to ordinary statistics and probability. I can't say that this makes much sense to me.
In this paper the opposite route is taken, which is fun. It subverts that view by playing the same game in reverse: statistical/probabilistic notions are reduced to fuzzy ones. The point is: so what?
A fuzzy set of central points of a probability distribution with respect to a family of fuzzy reference events is defined. Its fuzzy set theoretical interpretation is very natural: the membership degree of x equals the truth value of the proposition "Every reference event containing x is probable".
Natural location estimators are then the points whose membership in that fuzzy set is maximal. The paper presents many examples of known notions from statistics and probability arising as maximally central estimators (of a distribution or, more generally, of a family of distributions). The prototype of a maximally central estimator is the mode (obtained by taking the singletons as reference events), and MCEs can thus be seen as generalized modes.
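A stripped-down sketch of the idea, for crisp reference events and with `$A$ is probable' read simply as the value $P(A)$ (the paper's fuzzy-logic formulation is more general): the centrality degree of a point $x$ is
$\deg(x) = \inf\{\,P(A) : A \ \text{a reference event with } x \in A\,\},$
so with singletons as reference events $\deg(x) = P(\{x\})$ and the maximally central points are exactly the modes.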
From the paper's abstract: "This framework has a natural interpretation in terms of fuzzy logic and unifies many known notions from statistics, including the mean, median and mode, interquantile intervals, the Lorenz curve, the halfspace median, the zonoid and lift zonoid, the coverage function and several expectations and medians of random sets, and the Choquet integral against an infinitely alternating or infinitely monotone capacity."
Up the line:
This starts a new line.
Down the line:
·Connections between statistical depth functions and fuzzy sets (2010).
A long paper on statistical consistency has been submitted.
To download, click on the title or here.
Monday 21 February 2011
Algebraic, metric and probabilistic properties of convex combinations based on the t-normed extension principle: the Strong Law of Large Numbers
P. Terán (2013). Fuzzy Sets and Systems 223, 1-25.
I think this is a nice paper. It combines many things and, being a 42-page manuscript to prove one theorem, it brings an `epic' culmination to part of my earlier research.
The general framework is that of convex combination spaces, an attempt by Ilya Molchanov and myself (2006 below) at an axiomatic treatment of expectation in metric spaces taking the convex combination of points as the basic operation.
Consider fuzzy sets in such a metric space. The convex combination operation in the carrier space can be lifted to the superspace of fuzzy sets by using one of the many extension principles in the literature. These extension devices differ in the choice of a continuous triangular norm (a special ordered topological semigroup on [0,1]).
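For concreteness, the sup-$T$ form of the extension principle is the one meant here (a sketch of the standard construction, not necessarily the paper's exact formulation): a binary operation $*$ on the carrier space is lifted to fuzzy sets $U$, $V$ by
$(U *_T V)(z) = \sup\{\,T(U(x), V(y)) : x * y = z\,\},$
where $T$ is the chosen continuous triangular norm; taking $*$ to be the convex combination of two points gives the lifted convex combinations the paper studies.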
I had already shown that, with an appropriate topology, any such extension device satisfies the Strong Law of Large Numbers (in a finite-dimensional space; 2008 below). The fact that the limit in the SLLN varies with the triangular norm was the motivation to go for the abstract framework of convex combination spaces.
The problem is for which triangular norms the SLLN holds in another topology which is the strongest in the literature. It was known that the minimum yields such an SLLN, and the product does not.
It turns out that the stronger SLLN is characterized
-algebraically: SLLN iff the triangular norm is eventually idempotent.
-metrically: SLLN iff the extension to fuzzy sets retains the property of being a convex combination space.
A nice result!
Up the line:
·On limit theorems for t-normed sums of fuzzy random variables (2004).
·A law of large numbers in a metric space with a convex combination operation (2006, w. Ilya Molchanov). Downloadable from Ilya's website.
·Probabilistic foundations for measurement modelling with fuzzy random variables (2007).
·On convergence in necessity and its laws of large numbers (2008).
·Strong law of large numbers for t-normed arithmetics (2008).
Down the line:
There remains a short coda.
To download, click here.
Thursday 23 September 2010
Connections between statistical depth functions and fuzzy sets
P. Terán (2010). In: Combining Soft Computing and Statistical Methods in Data Analysis (C.Borgelt et al., editors), 611--618. Springer, Berlin.
[Proceedings of the 5th Intl. Conf. on Soft Methods in Statistics and Probability]
[Invited session Probabilistic aspects of fuzzy sets]
Abstract: We show that two probabilistic interpretations of fuzzy sets via random sets and large deviation principles have a common feature: they regard the fuzzy set as a depth function of a random object. Conversely, some depth functions in the literature can be regarded as the fuzzy sets of central points of appropriately chosen random sets.
Up the line:
·Centrality as a gradual notion: a new bridge between Fuzzy Sets and Statistics (submitted)
Down the line:
Subsequent work will be typed somewhere in time.
To download, click on the title or here.
Thursday 18 June 2009
On consistency of stationary points of stochastic optimization problems in a Banach space
P. Terán (2010). Journal of Mathematical Analysis and Applications 363, 569-578.
A well-known technique for stochastic optimization problems in which one aims to optimize the expected value of a function is called the Sample Average Approximation (SAA). The expectation is replaced by the average over a sample, and the solution of the SAA problem is close to that of the original one. But if the function to optimize is non-convex and non-smooth, solving the SAA problem can itself be hard and one may want to proceed by finding candidate points.
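Schematically (standard SAA notation, mine): the true problem and its sample average approximation are
$\min_x\ f(x) = E[F(x,\xi)] \qquad \text{vs.} \qquad \min_x\ \hat f_N(x) = \tfrac{1}{N}\sum_{i=1}^N F(x,\xi_i),$
with $\xi_1,\dots,\xi_N$ an i.i.d. sample of $\xi$; when $F(\cdot,\xi)$ is non-convex and non-smooth one settles for stationary points (in the sense of a suitable subdifferential) of $\hat f_N$ rather than global minimizers.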
That's why one may care about whether natural candidates for the SAA problem are close to candidates for the original problem. Shapiro and Xu (2007) studied it in the finite-dimensional setting, and Balaji and Xu (2008) presented an infinite-dimensional generalization. Balaji and Xu made three assumptions on the space or on the function. We show that the result holds without any of them.
A number of tools of independent interest, seemingly far from the problem, had to be developed, including a law of large numbers and a Komlós Theorem for random weak* compact sets. That made the paper fun to work on.
Up the line:
·On a uniform law of large numbers for random sets and subdifferentials of random functions (2008).
Down the line:
Nothing for the moment.
To download, click on the title or here.
Monday 20 April 2009
Probabilistic foundations for measurement modelling with fuzzy random variables
P. Terán (2007). Fuzzy Sets and Systems 158, 973-986.
For some reason, I never wrote an entry for this paper when I started this blog.
It appeared in the FSS special issue Selected papers from IFSA 2005, 11th World Congress of International Fuzzy Systems Association, for which 7 papers were selected out of the 340 conference communications. (I made it into the 2% cut, showing that events with probability zero do happen. To me, it was already a big success to make it into an invited session at Something's World Congress.)
It shows how to use fuzzy random variables to model measurements in the simplest situation: the final estimate of the measurand is the average of the measurements. The measurand is assumed to be crisp; fuzziness appears due to uncertainty in measurement. Uncertainty is propagated using a t-normed extension principle with an Archimedean t-norm.
The paper points out a lot of things that remain to be done. This research was nice but it's hard for me to go on with it after I realized that I didn't know how to persuade a practitioner that this theoretical framework was simple enough to deserve their consideration.
Up the line:
·Strong law of large numbers for t-normed arithmetics (2008).
Down the line:
There are ideas for a sequel which I planned to submit to a metrology journal but, as I said, I've never found the words to convince them that it's worth reading.
To download, click on the title or here. There are some typos I hope were caught in proof-editing.
Tuesday 7 April 2009
On the equivalence of Aumann and Herer expectations of random sets
P. Terán (2008). Test 17, 505-514.
Trying to understand when the Herer expectation of a random set falls into the abstract definition of expectation in A law of large numbers in a metric space with a convex combination operation unexpectedly triggered a line of research on the Herer expectation itself.
This paper characterizes those separable Banach spaces where the Herer expectation is identical with the Aumann expectation, thus solving an open problem in Ilya Molchanov's book Theory of random sets as you can read here.
A few other results are presented: for instance, if the dual space is separable then the Aumann expectation is the intersection of the Herer expectations with respect to all equivalent norms. Nice!
It must be mentioned that the proof of the main theorem contains a gap and so applies only to Hausdorff approximable random sets. That does not compromise the solution of the problem as stated in Ilya's book. A different proof will be presented in forthcoming work.
Up the line:
·A law of large numbers in a metric space with a convex combination operation (w. Ilya Molchanov). Downloadable from Ilya's website.
Down the line:
·Intersections of balls and the ball hull mapping (2010).
·Expectations of random sets in Banach spaces (accepted 2013).
To download, click on the title or here. This preprint differs in minor details from the published version.
Friday 3 April 2009
Intersections of balls and the ball hull mapping
P. Terán (2010). Journal of Convex Analysis 17, 277-292.
Although there is no probability in it, this paper is in fact part of a strand of research in random set theory. The driving question is as follows: what is the relationship between the Aumann and the Herer expectation of a random set? Since Aumann's definition relies on integration in Banach spaces whereas Herer's is geometric, the mere fact that definite relationships exist is interesting.
I proved that both expectations are identical if and only if the space has the Mazur Intersection Property that every closed bounded convex set can be written as the intersection of a family of balls (this happened in a paper in Test which, I realize now, I haven't uploaded here yet).
However, to answer the very next question (`But are they related at all when they are not identical?'), I needed to study some properties of the sets which can be written as intersections of balls, and of the `ball hull', the analogue of the convex hull in this context. Which is what this paper is about.
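In symbols (notation mine): the ball hull of a bounded set $A$ is
$\beta(A) = \bigcap\{\,B : B \ \text{a closed ball},\ A \subseteq B\,\},$
the smallest intersection of balls containing $A$, just as the convex hull is the smallest convex set containing $A$; the Mazur Intersection Property says precisely that $\beta(C) = C$ for every closed bounded convex $C$.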
All the reasoning in the paper is elementary so it makes a nice starting point for further research.
Up the line:
·On the equivalence of Aumann and Herer expectations of random sets (2008).
Down the line:
·Expectations of random sets in Banach spaces (accepted 2013).
To download, click on the title or here.
Wednesday 7 May 2008
On convergence in necessity and its laws of large numbers
P. Terán (2008). In: Soft Methods for Handling Variability and Imprecision, 289--296. Springer, Berlin.
[Proceedings of the 4th Intl. Conf. on Soft Methods in Statistics and Probability]
An interesting question is what happens to random variables when the probability measure is replaced by a non-additive measure. That topic has been intermittently studied in the fuzzy literature for over 20 years, and has also received attention from economic theorists.
I've had a few ideas about that for some time. I took the opportunity of an invited lecture at the University of Extremadura to put them in order and start writing a paper, which, however, won't be ready until the long-awaited 36-hour-day regulations are enforced.
This paper presents some convergence results, in an attempt to clarify the difference between LLNs for possibilistic variables and LLNs for their distributions identified with fuzzy sets.
It also shows that usual techniques, relying on shape assumptions related to the t-norm generators, can be effectively replaced by other techniques independent of the particularities of the t-norm modelling the interactivity between the variables.
Up the line:
Strong law of large numbers for t-normed arithmetics (2008)
Down the line:
An evolution of this paper with proofs and new results will be typed later this year. (I don't think that will happen. It's 2013 now and I doubt a journal would be excited to publish a full-length version of a 2008 conference paper. Why bother typing it then?- PT, Mar 18th 2013)
To download, click on the title or here.
Thursday 24 January 2008
Teoremas de aproximación y convergencia para funciones y conjuntos aleatorios
P. Terán (2002). Ph. D. Thesis.
This is my thesis, defended 10 March 2003. It's in Spanish, so it won't be so helpful to most people. The title is `Approximation and convergence theorems for random sets and random functions'.
Some parts of it never appeared in paper form (mostly, but not only, the weaker ones).
The main lines are:
-Korovkin-type approximation theorems for set-valued and fuzzy-valued functions.
-Random approximation of set-valued mappings.
-Strong law of large numbers for t-normed sums of fuzzy random variables.
To download, click on the title or here.
Tuesday 13 November 2007
Strong law of large numbers for t-normed arithmetics
P. Terán (2008). Fuzzy Sets and Systems 159, 343-360.
This paper was conceived and written in late 2001 and early 2002. For years, I stubbornly tried to have it published in a Statistics & Probability journal. A tale of this epic yet unfruitful quest was told in my personal blog (in Spanish).
I finally gave up and submitted it to FSS, where I knew it would be handled by knowledgeable reviewers.
Young people do all sorts of unrewarding things.
Up the line:
This started a new line.
Down the line:
A lot of subsequent material found its way to publication before the paper itself did.
·On limit theorems for t-normed sums of fuzzy random variables (2004).
·A law of large numbers in a metric space with a convex combination operation (2006, w. Ilya Molchanov). You may download it from Ilya's website.
·Probabilistic foundations for measurement modelling with fuzzy random variables (2007).
There is a further paper in the making, quite an interesting one.
To download, click on the title or here.
Wednesday 31 October 2007
On a uniform law of large numbers for random sets and subdifferentials of random functions
P. Terán (2008). Statistics and Probability Letters 78, 42-49.
While spending a night in the (South) Tenerife Airport, which means a lot of time to kill, I read a JMAA paper by Alexander Shapiro and Huifu Xu in which they obtained a strong law of large numbers for subdifferentials of random functions, as a tool for consistency analysis of stationary points in non-convex non-smooth stochastic optimization (math jargon rules, doesn't it?).
Their LLN, though sufficient for their purpose, depended on a sort of `blurring radius' parameter r>0 and so didn't cover the `exact' case r=0 unless the functions were additionally assumed to be continuous.
That looked like a perfect benchmark for the abstract LLN Ilya Molchanov and I had proved. It turned out that the strongest case r=0 held under upper semicontinuity (provided a separability condition on the range of the multifunction) or even weaker conditions, showing that continuity in fact played no role in the problem.
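For orientation, the classical set-valued SLLN in the background of this line of work is the Artstein-Vitale theorem (stated loosely, as a sketch): for i.i.d. random compact sets $X_1, X_2, \dots$ with $E\|X_1\| < \infty$,
$d_H\Big(\tfrac{1}{n}(X_1 \oplus \cdots \oplus X_n),\ E[\mathrm{conv}\, X_1]\Big) \to 0 \quad \text{a.s.},$
where $\oplus$ is Minkowski addition, $d_H$ the Hausdorff distance and the expectation is Aumann's.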
Up the line:
A law of large numbers in a metric space with a convex combination operation (w. Ilya Molchanov). You may download a (non-final) preprint copy from Ilya's website.
Down the line:
On consistency of stationary points of stochastic optimization problems in a Banach space (200x).
To download, click here. This is *the* good version; SPL has published their own version which I don't endorse in any way.
Friday 26 October 2007
A continuity theorem for cores of random closed sets
P. Terán (2008). Proceedings of the American Mathematical Society 136, 4417-4425.
The starting point is Zvi Artstein's 1983 paper on distributions of random sets. Among other things, he proved that, if a sequence of distributions of random compact sets converges weakly in the Hausdorff metric, their cores (or sets of distributions of selections) also converge in the Hausdorff metric defined on the space of compact sets of distributions. The proof is quite laborious, and I have never been able to read it through.
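In symbols, following the terminology above (a paraphrase, not the theorem's exact hypotheses): the core of the distribution of a random set $X$ is the set of distributions of its measurable selections,
$\mathrm{core}(P_X) = \{\,P_\xi : \xi \ \text{measurable},\ \xi \in X \ \text{a.s.}\,\},$
and the continuity statement is that weak convergence of $P_{X_n}$ to $P_X$ implies convergence of $\mathrm{core}(P_{X_n})$ to $\mathrm{core}(P_X)$ in the Hausdorff distance induced by a metric for weak convergence of distributions (e.g. the Prohorov metric).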
On a rainy summer afternoon I killed some time by proving it using the Skorokhod representation theorem. I worked a bit harder and adapted the new proof to get an extension to the unbounded case in locally compact separable Hausdorff spaces with the Fell topology. Then I checked Artstein again and saw that he already knew that (using the one-point compactification, much smarter than me). I realized that, to get something publishable, I would need a proof of the general unbounded case.
By Christmas, I had that general proof. It involves some results and notions from hyperspace topology which were developed in the nineties. It is quite a symphonic proof, with a large number of elements assembled together in a very nice way.
Up the line:
This starts a new line.
Down the line:
Nothing yet. I have some material I will finish and prepare for publication as soon as 36-hour days are available.
To download, click on the title or here.