*Journal of Mathematical Analysis and Applications*

**363**, 569-578.

A well-known technique for stochastic optimization problems, in which one aims to optimize the expected value of a function, is the Sample Average Approximation (SAA): the expectation is replaced by the average over a sample, and under suitable conditions the solution of the SAA problem is close to that of the original one. But if the function to be optimized is non-convex and non-smooth, solving the SAA problem can itself be hard, and one may instead settle for finding candidate points (such as stationary points).
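To fix ideas, here is a minimal sketch of the SAA idea, under assumptions of my own choosing: a hypothetical nonsmooth objective `f(x, xi) = |x - xi|` with `xi` uniform on (0, 1), whose true problem minimizes E|x - xi| (minimized at the median, 0.5), and a crude grid search in place of a real optimizer.

```python
import random

# Hypothetical nonsmooth integrand; the true problem minimizes
# E[f(x, xi)] over x, with xi ~ Uniform(0, 1).
def f(x, xi):
    return abs(x - xi)

def saa_objective(x, sample):
    # Sample Average Approximation: replace the expectation by the
    # average over a fixed sample xi_1, ..., xi_n.
    return sum(f(x, xi) for xi in sample) / len(sample)

random.seed(0)
sample = [random.random() for _ in range(10_000)]

# Minimize the SAA objective over a grid; for E|x - xi| the true
# minimizer is the median of xi, i.e. 0.5.
grid = [i / 1000 for i in range(1001)]
x_saa = min(grid, key=lambda x: saa_objective(x, sample))
print(x_saa)  # close to 0.5 for a large sample
```

For this convex toy example the SAA minimizer converges to the true one; the paper's question is what survives of such guarantees, at the level of candidate points, when convexity and smoothness are dropped.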

This raises the question of whether natural candidate points for the SAA problem are close to candidate points for the original problem. Shapiro and Xu (2007) studied this question in the finite-dimensional setting, and Balaji and Xu (2008) presented an infinite-dimensional generalization under three assumptions on the space or on the function. We show that the result holds without any of them.

Along the way, a number of tools of independent interest, seemingly far removed from the problem, had to be developed, including a law of large numbers and a Komlós theorem for random weak* compact sets. That made the paper fun to work on.

Up the line:

- On a uniform law of large numbers for random sets and subdifferentials of random functions (2008).

Down the line:

Nothing for the moment.

