Author(s):
Plakhov, Alexander; Cruz, João Pedro Antunes Ferreira da
Date: 2009
Persistent Identifier: http://hdl.handle.net/10773/6176
Source: RIA - Repositório Institucional da Universidade de Aveiro
Subject(s): Stochastic approximation; Accelerated convergence; Step size adaptation
Description
An algorithm for searching for a zero of an unknown function $\varphi : \,
\mathbb{R} \to \mathbb{R}$ is considered: $\, x_{t} = x_{t-1} - \gamma_{t-1} y_t$,\,
$t=1,\ 2,\ldots$, where $y_t = \varphi(x_{t-1}) + \xi_t$ is the
value of $\varphi$ measured at $x_{t-1}$ and $\xi_t$ is the
measurement error. The step sizes $\gamma_t > 0$ are modified in the
course of the algorithm according to the rule: $\, \gamma_t =
\min\{u\, \gamma_{t-1},\, \bar{\gamma}\}$ if $y_{t-1} y_t
> 0$, and $\gamma_t = d\, \gamma_{t-1}$ otherwise, where $0 < d <
1 < u$ and $\bar{\gamma} > 0$. That is, at each iteration the step size is
multiplied either by $u$ or by $d$, provided that the resulting
value does not exceed the predetermined bound $\bar{\gamma}$.
The function $\varphi$ may have one or several zeros; the random values $\xi_t$ are
independent and identically distributed, with zero mean and finite
variance. Under some additional assumptions on $\varphi$, $\xi_t$, and
$\bar{\gamma}$, the conditions on $u$ and $d$ guaranteeing a.s.
convergence of the sequence $\{ x_t \}$, as well as a.s. divergence,
are determined. In particular, if $\mathbb{P}(\xi_1 > 0) = \mathbb{P}(\xi_1 < 0) =
1/2$ and $\mathbb{P}(\xi_1 = x) = 0$ for any $x \in \mathbb{R}$, one has
convergence for $ud < 1$ and divergence for $ud > 1$. Due to the
multiplicative updating rule for $\gamma_t$, the sequence $\{ x_t \}$
converges rapidly, like a geometric progression (if convergence
takes place), but the limit value may not coincide with a zero of
$\varphi$; it only approximates one of the zeros. By adjusting the
parameters $u$ and $d$, one can reach arbitrarily high precision of
the approximation; higher precision is obtained at the expense of a
lower convergence rate.
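
A minimal sketch of the iteration described above, in Python. The test function, the Gaussian noise model, and all parameter values (u, d, the step-size cap, the starting point) are illustrative assumptions for demonstration, not taken from the paper.

import random

def adaptive_step_zero_search(phi, x0, gamma0, u=1.1, d=0.85, gamma_max=1.0,
                              noise_sd=0.1, n_iter=2000, seed=0):
    """Search for a zero of phi with multiplicatively adapted step sizes.

    At each iteration x_t = x_{t-1} - gamma_{t-1} * y_t, where
    y_t = phi(x_{t-1}) + xi_t is a noisy measurement. The step size is
    multiplied by u (capped at gamma_max) when consecutive measurements
    have the same sign, and by d otherwise. Per the abstract's conditions,
    u*d < 1 is the convergence regime (here u*d = 0.935).
    """
    rng = random.Random(seed)
    x, gamma = x0, gamma0
    y_prev = None
    for _ in range(n_iter):
        y = phi(x) + rng.gauss(0.0, noise_sd)   # noisy measurement at x_{t-1}
        x = x - gamma * y                       # update using gamma_{t-1}
        if y_prev is not None:
            if y_prev * y > 0:                  # same sign: no overshoot, grow step
                gamma = min(u * gamma, gamma_max)
            else:                               # sign change: overshoot, shrink step
                gamma = d * gamma
        y_prev = y
    return x

# Illustrative use: phi(x) = x - 2 has its zero at x = 2, so the iterates
# should settle near 2 (only approximately, as the abstract notes).
if __name__ == "__main__":
    approx_zero = adaptive_step_zero_search(lambda x: x - 2.0, x0=10.0, gamma0=0.5)
    print(approx_zero)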