0$ is fixed, under assumptions ($\ref{eq:qclose}$),
($\ref{eq:qsmall}$) and ($\ref{eq:prob}$).
As in the previous proof, we
use Esseen's Inequality (Lemma $\ref{essen}$) to prove that $F_1$ and
$F_2$ can both be approximated in distribution by Gaussian random
variables with appropriate mean and variance. For $F_1$ this can be
done by setting (using the notation of Lemma $\ref{essen}$)
$$ X_j = T_j - \frac{1}{f(x-1+j)} \; (j=1,2,3,\dots)$$
and again noting that there are no problems in applying the Lemma to
this infinite sequence of random variables. Since
\begin{eqnarray*}\sum_{j=1}^{+\infty}\mbox{Var}[X_j] & = & \sum_{n=x}^{+\infty}\frac{1}{f(n)^2} \;=\; S_2(x),\\
\sum_{j=1}^{+\infty}E[|X_j|^3] & = & \ub{\sum_{n=x}^{+\infty}\frac{1}{f(n)^3}} \;=\; \ub{S_3(x)}\end{eqnarray*}
and by assumption ($\ref{eq:qsmall}$), for $r=2,3$,
$$S_r(x) = S_r(a+\lambda q(a)) = (1+\ub{\er(a)})S_r(a),$$
the error term in Esseen's inequality is of the order of
$$L = \frac{S_3(x)}{S_2(x)^{3/2}} = (1+\ub{\er(a)})\frac{S_3(a)}{S_2(a)^{3/2}} = \ub{\er(a)}.$$
This implies that the distribution of $F_1$ is $\ub{\er(a)}$-close to the distribution of a normal random variable with mean and variance given by
\begin{equation}E[F_1] = S_1(x)\mbox{ and } \mbox{Var}[F_1] = S_2(x) = (1+\ub{\er(a)}) S_2(a).\end{equation}
An analogous statement holds for $F_2$. As a result, the distribution
of $F_1-F_2$ is $\ub{\er(a)}$-close to that of a normal random
variable with mean and variance given by
$$\mu = E[F_1] - E[F_2] = - \sum_{n=a-\lambda q(a)}^{a+\lambda q(a)-1} \frac{1}{f(n)} = - (1+\ub{\er(a)})\frac{2\lambda q(a)}{f(a)},$$
$$\sigma^2 = \mbox{Var}[F_1] + \mbox{Var}[F_2] = (1+\ub{\er(a)})2S_2(a).$$
It follows that
$$\mon(x,y) = \Pr(F_1-F_2<0) = \Phi\left(-\frac{\mu}{\sigma}\right) + \ub{\er(a)}.$$
By ($\ref{eq:qclose}$) and the definition of $q_0$,
$$-\frac{\mu}{\sigma} = (1+\ub{\er(a)})\frac{2\lambda q_0(a)}{f(a)\sqrt{2S_2(a)}} = (1+\ub{\er(a)})\lambda.$$
The above finally implies
$$\mon(x,y) = \Phi\left((1+\ub{\er(a)})\lambda\right) + \ub{\er(a)} = \Phi(\lambda)+\ub{\er(a)},$$
finishing the proof.\end{proof}
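Before moving on, the Gaussian approximation above can be probed numerically. The sketch below is our illustration only (the helper names and the choice $f(n)=n^2$ are ours): it samples truncated versions of $F_1$, a sum of independent exponential variables $T_n$ with rates $f(n)$ for $n\geq x$, and compares the empirical mean and variance with the correspondingly truncated sums $S_1(x)$ and $S_2(x)$.

```python
import random

def sample_F(x, p, n_terms, rng):
    """Sample a truncated version of F_1: a sum of independent exponential
    variables T_n with rate f(n) = n**p, for n = x, ..., x + n_terms - 1.
    (The names and the choice f(n) = n**p are ours, for illustration.)"""
    return sum(rng.expovariate(float(n) ** p) for n in range(x, x + n_terms))

def S(r, x, p, n_terms):
    """Truncated S_r(x) = sum over the same range of 1/f(n)^r, cut at the
    same index so that truncation bias cancels in the comparison below."""
    return sum(1.0 / float(n) ** (r * p) for n in range(x, x + n_terms))

rng = random.Random(0)
x, p, N, trials = 50, 2, 1000, 3000
samples = [sample_F(x, p, N, rng) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / (trials - 1)
print(mean, S(1, x, p, N))  # empirical mean vs S_1(x)
print(var, S(2, x, p, N))   # empirical variance vs S_2(x)
```

Truncating the samples and the reference sums at the same index makes the comparison exact in expectation, so the two pairs of numbers agree up to sampling error.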
\subsection{The general result}\label{sub:final}
Let $f:\mathbb N\to (0,+\infty)$ be a feedback function
(i.e. positive and non-decreasing). Letting $g(n)=\ln f(n)$, we can
easily extend $g$ to a piecewise affine function on all positive real
numbers by linear interpolation. As a result, all feedback functions
$f$ can be extended to piecewise smooth functions on the positive real
numbers. That is the class of functions to which Theorem
$\ref{thm:final}$ applies.
\begin{theorem}\label{thm:final}
Assume that a function $f$ is a positive, non-decreasing\footnote{Condition ($\ref{eq:hconst}$) implies that $f=f(x)$ is in fact increasing in $x$ for $x$ big enough.}, piecewise smooth
function defined on the positive real numbers, and assume that it
satisfies ($\ref{eq:monop}$). Define $g(x)=\ln f(x)$ and
$h(x)=xg'(x)$, where $g'$ is the right derivative of $g$. Assume that
\begin{equation}
\label{eq:hconst}\liminf_{x\to+\infty}h(x) > \frac{1}{2}\mbox{, }\;\lim_{x\to+\infty}g'(x) = \lim_{x\to +\infty}\frac{h(x)}{x} = 0,
\end{equation}
and also that there is a constant $C>0$ such that for all $0<\e<1/2$
and all $x$ big enough
\begin{equation}\label{eq:hslow}\sup_{x\leq t\leq x^{1+\e}}\left|\frac{h(t)}{h(x)}-1\right|\leq C\e .
\end{equation}
It then holds that $\sqrt{\frac{a}{4h(a)-2}}$ is the scale
function of the balls-and-bins process with feedback function $f$.
That is, if $$q(a)\sim \sqrt{\frac{a}{4h(a)-2}} \mbox{ as }a\to +\infty$$ then for any fixed $\lambda > 0$ the probability
of monopoly by bin $1$ in such a process started from state
$(x,y)=\left(a+\lambda q(a), a-\lambda q(a)\right)$ converges to
$\Phi(\lambda)$ as $a\to +\infty$.
\end{theorem}
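For concreteness (this instantiation is ours, though it matches the $f(n)=n^p$ case discussed in the final remarks), consider $f(x)=x^p$ with $p>1$. Then $g(x)=p\ln x$, $g'(x)=\frac{p}{x}$ and $h(x)=xg'(x)=p$, so that ($\ref{eq:hconst}$) and ($\ref{eq:hslow}$) hold trivially, and the scale function specializes to
$$q(a)\sim\sqrt{\frac{a}{4p-2}} \mbox{ as }a\to+\infty.$$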
\begin{proof}
We shall check that the conditions of Lemma $\ref{thm2}$ are
satisfied. The crucial step in checking these conditions is to
estimate $S_2(n)$ and $S_3(n)$, which we accomplish by
evaluating corresponding integrals.
Let $r\geq 2$ and define
$$I_r(a) = \int_a^{+\infty} \frac{dx}{f(x)^r} = \int_{a}^{+\infty}\frac{dx}{e^{rg(x)}}.$$
In what follows we will prove that
$$S_r(a)\sim I_r(a)\sim \frac{a}{(rh(a)-1)f(a)^r}\mbox{ as }a\to +\infty.$$
By integration by parts,
$$I_r(a) = \left. \frac{x}{e^{rg(x)}}\right]_{x=a}^{x=+\infty} + r\int_{a}^{+\infty} \frac{xg'(x)\; dx}{e^{rg(x)}} = - \frac{a}{f(a)^r} + r\int_{a}^{+\infty} \frac{h(x)\; dx}{e^{rg(x)}}.$$
Here we have used the fact that
\begin{equation}\label{eq:ffast} f(x)^r \gg x \mbox{ as }x\to +\infty \mbox{ for } r\geq 2,\end{equation}
which can be deduced from the fact that $\liminf_{x\to+\infty}h(x)>\frac 1 2$: if $h(t)\geq h_0>\frac 1 2$ for $t\geq x_0$, then $g(x)\geq h_0\ln x + \ub{1}$, hence $f(x)^r\gg x^{rh_0}\gg x$ for $r\geq 2$. We now make use of the following claim, which we prove subsequently.
\begin{claim}\label{claim1}As $a\to +\infty$
\begin{equation}\label{eq:claim1}\int_{a}^{+\infty} \frac{h(x)\; dx}{e^{rg(x)}}\sim h(a)\int_a^{+\infty}\frac{dx}{e^{rg(x)}} = h(a)I_r(a).\end{equation}\end{claim}
Claim $\ref{claim1}$ implies that as $a\to +\infty$
$$I_r(a) = -\frac{a}{f(a)^r} +
(1+o(1))rh(a)\int_{a}^{+\infty}\frac{dx}{e^{rg(x)}} = - \frac{a}{f(a)^r}
+ (1+o(1))rh(a)I_r(a).$$
Assumption ($\ref{eq:hconst}$) tells us that $rh(a)>1$ for $r\geq 2$
and $a$ big enough. This permits us to write
$$I_r(a) = (1+o(1))\frac{a}{(rh(a)-1)f(a)^r}.$$
Since by (\ref{eq:hconst}), $a \gg h(a)$, we have
$$I_r(a) \gg \frac{1}{f(a)^r}.$$
Noting that $|S_r(a)-I_r(a)|\leq \frac{1}{f(a)^r}$, we can finally conclude
\begin{equation}\label{eq:estimate}S_r(a) \sim I_r(a) \sim \frac{a}{(rh(a)-1)f(a)^r} \mbox{ as }a\to+\infty\mbox{ }(r\geq 2).\end{equation}
This gives us the asymptotic form of $S_2$ and $S_3$ as in Lemma $\ref{thm2}$. Moreover, we can compute
$$q_0(n) = f(n)\sqrt{\frac{S_2(n)}{2}}\sim \sqrt{\frac{n}{4h(n)-2}}.$$
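The estimate ($\ref{eq:estimate}$) is easy to check numerically in the power-law case. The sketch below is our illustration only (the function names are ours): it takes $f(n)=n^p$ with $p=2$, for which $h\equiv p$, and compares partial sums of $S_r(a)$ with the prediction $a/((rp-1)a^{rp})$.

```python
def S_r(a, r, p, n_terms=400000):
    """Partial sum of S_r(a) = sum_{n >= a} 1/f(n)^r for the illustrative
    feedback f(n) = n**p (for which h(x) = p identically)."""
    return sum(1.0 / float(n) ** (r * p) for n in range(a, a + n_terms))

def predicted(a, r, p):
    """The asymptotic estimate a / ((r*h(a) - 1) * f(a)**r) with h(a) = p."""
    return a / ((r * p - 1) * float(a) ** (r * p))

a, p = 200, 2
for r in (2, 3):
    print(r, S_r(a, r, p) / predicted(a, r, p))  # ratios close to 1
```

Already at $a=200$ both ratios are within about one percent of $1$, in line with the $1+o(1)$ factor in ($\ref{eq:estimate}$).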
All that remains to be shown is that the assumptions of Lemma
$\ref{thm2}$ hold in this case. For convenience we simply show
that $\er(a)=o(1)$. To this end, we let
$$q(n) \sim \sqrt{\frac{n}{4h(n)-2}} \mbox{ as }n\to +\infty$$
and note that this guarantees the validity of ($\ref{eq:qclose}$). To finish the proof, we show that as $a\to +\infty$
\begin{equation}S_3(a)\ll S_2(a)^{3/2};\end{equation}
\begin{equation}\label{eq:finallambda}\forall \lambda > 0 \; f(a \pm \lambda q(a))\sim f(a).\end{equation}
The first of these follows from ($\ref{eq:estimate}$) together with ($\ref{eq:hconst}$) (which gives $\frac{a}{h(a)}\gg 1$):
$$S_3(a) \sim \frac{a}{(3h(a)-1)f(a)^3} \ll S_2(a)^{3/2} \sim
\frac{1}{f(a)^3}\left(\frac{a}{2h(a)-1}\right)^{3/2},$$
since the ratio of the two middle terms is $\ub{\sqrt{h(a)/a}}=o(1)$.
To prove ($\ref{eq:finallambda}$), fix an arbitrary $\lambda>0$. By the definition of $h$,
$$\left|g(a\pm\lambda q(a)) - g(a)\right| \leq \left|\int_{a}^{a\pm\lambda q(a)}h(t)\frac{dt}{t}\right|\leq \ln\left(\frac{a+\lambda q(a)}{a-\lambda q(a)}\right)\left\{ \sup_{a-\lambda q(a)\leq t\leq a+\lambda q(a)}h(t)\right\}.$$
Since $q(a)=\ub{\sqrt a}$, ($\ref{eq:hslow}$) implies
$$\sup_{a-\lambda q(a)\leq t\leq a+\lambda q(a)} h(t) \sim h(a).$$
We conclude (this time using $q(a)=\ub{\sqrt{a/h(a)}}$) that
$$\left|g(a\pm\lambda q(a)) - g(a)\right| \leq (1+o(1))\, h(a) \ln\left(\frac{a+\lambda q(a)}{a-\lambda q(a)}\right)= \ub{\frac{h(a)}{a} q(a)} = \ub{\sqrt{\frac{h(a)}{a}}}=o(1),$$
because $a\gg h(a)$ by ($\ref{eq:hconst}$). Hence
$$ \frac{f(a\pm \lambda q(a))}{f(a)} = e^{g(a\pm \lambda q(a))-g(a)} = e^{o(1)}.$$
This proves ($\ref{eq:finallambda}$) and finishes the proof.\end{proof}
To conclude, we now prove Claim $\ref{claim1}$.
\begin{proof}[Proof of Claim $\ref{claim1}$] We first show that for any fixed $0<\e<\frac 1 2$, as $a\to +\infty$
\begin{equation}\label{eq:firstclaim}\frac{\int_{a}^{a^{1+\e}}\frac{h(x)\; dx}{e^{rg(x)}} }{\int_{a}^{+\infty}\frac{h(x)\; dx}{e^{rg(x)}}} \sim 1.\end{equation}
A change of variables permits us to rewrite
\begin{equation}\label{eq:firstlim} \int_{a^{1+\e}}^{+\infty}\frac{h(x)\; dx}{e^{rg(x)}} = (1+\e) \int_{a}^{+\infty}\frac{h(u^{1+\e})u^{\e}\; du}{e^{rg(u^{1+\e})}}.\end{equation}
Equation ($\ref{eq:hslow}$) implies that for all $u$ big enough,
$h(u^{1+\e})\leq (1+C\e)h(u)$. Moreover, ($\ref{eq:hconst}$) allows us
to choose an $a$ such that $h(u)\geq h_0>\frac{1}{2}$ for all $u\geq a$,
which implies
$$g(u^{1+\e}) - g(u) = \int_{u}^{u^{1+\e}}g'(t)\,dt \geq \inf_{t\geq a}h(t)\int_{u}^{u^{1+\e}}\frac{dt}{t} \geq h_0 \e\ln u.$$
We therefore find
\begin{eqnarray}
\label{eq:growfast}
e^{rg(u^{1+\e})} \geq u^{rh_0\e}e^{rg(u)}.\end{eqnarray}
Also note that $rh_0\e > \e$, since $rh_0\geq 2h_0>1$.
Plugging this into ($\ref{eq:firstlim}$) yields the following estimate as
$a\to +\infty$
$$\int_{a^{1+\e}}^{+\infty}\frac{h(x)\; dx}{e^{rg(x)}} \leq (1+\e)(1+C\e)\int_{a}^{+\infty}\frac{h(u)u^{\e}\; du}{e^{rg(u)}u^{rh_0\e}}\leq \ub{a^{\e-rh_0\e}}\int_{a}^{+\infty}\frac{h(u)\; du}{e^{rg(u)}}.$$
Since $rh_0\e>\e$, the factor $\ub{a^{\e-rh_0\e}}$ tends to $0$ as $a\to+\infty$, and this implies
$$\int_{a}^{+\infty}\frac{h(x)\; dx}{e^{rg(x)}} \sim \int_{a}^{a^{1+\e}}\frac{h(x)\; dx}{e^{rg(x)}} $$
as stated. Now note that, by assumption ($\ref{eq:hslow}$) on $h$,
$$(1-C\e)h(a){\int_{a}^{a^{1+\e}}\frac{dx}{e^{rg(x)}}}\leq {\int_{a}^{a^{1+\e}}\frac{h(x)\; dx}{e^{rg(x)}}}\leq (1+C\e)h(a){\int_{a}^{a^{1+\e}}\frac{dx}{e^{rg(x)}}}$$
and by similar reasoning as above
$${\int_{a}^{a^{1+\e}}\frac{dx}{e^{rg(x)}}}\sim {\int_{a}^{+\infty}\frac{dx}{e^{rg(x)}}}.$$
Putting these facts together finishes the proof of the claim.\end{proof}
\section{Final remarks}
We have provided a full description of scaling behavior of the
probability of monopoly for a broad class of feedback functions
satisfying condition ($\ref{eq:monop}$), which corresponds to $p>1$ in
the $f(n)=n^p$ case. One is tempted to ask whether similar results
hold when condition ($\ref{eq:monop}$) fails; a natural analogue of monopoly can in fact be defined for all $p>1/2$.
It turns out \cite{Khanin2} that any feedback function $f$ satisfying
\begin{equation}\label{eq:elead}\sum_{n=1}^{+\infty} \frac{1}{f(n)^2}<+\infty\end{equation}
yields a process such that with probability 1, one of the bins
has more balls than the other at all sufficiently large
times. In forthcoming work, Oliveira and Spencer \cite{OS} prove that,
if $f(n)=n^p$, $p>1/2$, the probability that a bin obtains {\em eventual
leadership} has a standard Gaussian limit precisely at the
$\lambda \sqrt{\frac{a}{4p-2}}$ scale, and similar results hold in the
general context of Theorem $\ref{thm:final}$ if assumption
($\ref{eq:monop}$) is dropped. They also show that the limit of the
{\em leadership} probability, which is defined to be the probability
that bin $1$ has more balls at all
subsequent times, is $2\Phi(\lambda)-1$ under the same scaling.
Many other natural questions remain open. For instance, are our
methods applicable to related non-linear models for Web
graphs \cite{DFM}? It seems likely that this problem requires
improvements on the error bounds for Gaussian approximation, and our
numerical data suggests that this is indeed possible. However, it is
also conceivable that large deviation bounds are enough for treating
many related problems. Finally, direct combinatorial proofs
(i.e. without resort to the exponential random variables) of the
results presented here would also be of great interest.
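As an illustration of the kind of numerical experiment alluded to above (this sketch is entirely ours; the function name and all parameter choices are arbitrary), one can estimate the monopoly probability by Monte Carlo for $f(n)=n^p$ with $p=2$, at the scale $q(a)=\sqrt{a/(4p-2)}$, and compare it with $\Phi(\lambda)$.

```python
import math
import random

def monopoly_estimate(a, p, lam, trials, steps, rng):
    """Crude Monte Carlo proxy for the monopoly probability: start at
    (a + lam*q, a - lam*q) with q = sqrt(a/(4p-2)), run the process with
    feedback f(n) = n**p for a fixed number of steps, and report how often
    bin 1 is ahead at the end.  A finite horizon only approximates monopoly,
    and all parameter choices here are ours."""
    q = math.sqrt(a / (4 * p - 2))
    wins = 0
    for _ in range(trials):
        x, y = a + lam * q, a - lam * q
        for _ in range(steps):
            fx, fy = x ** p, y ** p
            if rng.random() < fx / (fx + fy):  # bin 1 gets the next ball
                x += 1
            else:
                y += 1
        wins += x > y
    return wins / trials

rng = random.Random(1)
est = monopoly_estimate(a=500, p=2, lam=1.0, trials=300, steps=8000, rng=rng)
phi = 0.5 * (1.0 + math.erf(1.0 / math.sqrt(2.0)))  # Phi(1), about 0.84
print(est, phi)
```

For $p=2$ the leader is typically locked in long before the horizon is reached, so the estimate should already be close to $\Phi(1)\approx 0.84$ despite the moderate value of $a$.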
\begin{thebibliography}{99}
\bibitem{BA}
B. Arthur.
\newblock {\bf Increasing Returns and Path Dependence
in the Economy}.
\newblock The University of Michigan Press, 1994.
\bibitem{Davis}
B. Davis.
\newblock Reinforced Random Walks.
\newblock {\em Probability Theory and Related Fields}, 84:203-229, 1990.
\bibitem{DFM}
E. Drinea, A. Frieze, and M. Mitzenmacher.
\newblock Balls and Bins Models with Feedback.
\newblock In {\em Proceedings of the 13th Annual ACM-SIAM Symposium on Discrete Algorithms}, pp. 308-315, 2002.
\bibitem{Khanin2}
K. Khanin and R. Khanin.
\newblock A probabilistic model for establishment of neuron polarity.
\newblock Technical Report HPL-BRIMS-2000-16, June 2000.
\bibitem{Petrov}
V. Petrov.
\newblock {\bf Limit Theorems of Probability Theory}.
\newblock Oxford University Press, 1995.
\bibitem{OS}
R. Oliveira and J. Spencer.
In preparation.
\bibitem{SW}
J. Spencer and N. Wormald.
\newblock Explosive Processes.
\newblock Draft manuscript.
\end{thebibliography}
\end{document}