\documentclass[11pt]{article}
\addtolength{\oddsidemargin}{-1.25cm}
\addtolength{\textwidth}{3cm}
\addtolength{\textheight}{4cm}
\addtolength{\topmargin}{-2cm}
\newcommand{\een}{\end{enumerate}}
\newcommand{\noi}{\noindent}
\newcommand{\ra}{\rightarrow}
\newcommand{\sig}{\sigma}
\newcommand{\eps}{\epsilon}
\newcommand{\del}{\delta}
\newcommand{\Del}{\Delta}
\newcommand{\lam}{\lambda}
\newcommand{\ah}{\alpha}
\newcommand{\gam}{\gamma}
\newcommand{\ben}{\begin{enumerate}}
\newcommand{\beq}{\begin{equation}}
\newcommand{\eeq}{\end{equation}}
\newcommand{\bec}{\begin{center}}
\newcommand{\ece}{\end{center}}
\newtheorem{conjecture}{Conjecture}
\newtheorem{definition}{Definition}
\newtheorem{define}{Definition}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{claim}{Claim}[theorem]
\newtheorem{prop}[theorem]{Proposition}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{guess}[theorem]{Conjecture}
\begin{document}
\bec {\Large\bf The Giant Component: The Golden Anniversary}
\\ Joel Spencer \ece
\section{Paul Erd\H{o}s and Alfr\'ed R\'enyi}
Oftentimes the beginnings of a mathematical area are obscure
or disputed. The subject of {\em Random Graphs}
had, however, a clear beginning and it occurred fifty years ago.
Alfr\'ed R\'enyi (1921-1970) was head of the Hungarian Mathematical Institute
which today bears his name. R\'enyi had a great love of literature and
philosophy, a true Renaissance man. His life, in the words of Paul Tur\'an,
was one of intense and creative involvement in the exchange of ideas and
in public affairs.
His mathematics centered on Probability Theory. Paul
Erd\H{o}s (1913-1996) was a central figure in twentieth century
mathematics. This author recalls well the Memorial Conference
organized by his long-time friend and collaborator Vera S\'os in
1999, at which fifteen plenary lecturers discussed his contributions.
What was so surprising to us all was the sheer breadth of his work,
which spanned so many vital areas of mathematics. To this very
prejudiced author, it is his work in Discrete Mathematics that
is having the greatest lasting impact. In this area, Erd\H{o}s
tended toward asymptotic questions, a style very much relevant
to today's world of $o$, $O$, $\Theta$ and $\Omega$. For both
Erd\H{o}s and R\'enyi, mathematics was a collaborative enterprise.
They both had numerous coauthors, and they wrote $32$ joint
papers.
In 1960 they produced \cite{ER} their masterwork,
{\em On the Evolution of Random Graphs}.
They considered the following process.
Begin with $n$ vertices and no edges and add edges randomly
(that is, uniformly from among the potential edges) one by one.
Let $G[n,e]$ be the state when there are $e$ edges. Of course,
$G[n,e]$ could be any graph with $n$ vertices and $e$ edges and
technically is the uniform probability distribution over all
such graphs. Erd\H{o}s and R\'enyi analyzed the typical behavior
of $G[n,e]$ as $e$ ``evolved'' from $0$ to ${n\choose 2}$.
Their interest, and ours, lies in the asymptotic behavior of
this process as $n$ goes to infinity.
When $e$ approaches and passes $\frac{n}{2}$ the random graph
undergoes a phase transition, as described in Theorem \ref{2jumpthm}
below.
A typical computer run on a million vertices illustrates this
rapid change. At $e=400000$ edges
the size of the
largest component is only $168$. By
$e=600000$ edges the size of the largest component
has exploded to $309433$.
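This jump is easy to reproduce. The following sketch (hypothetical code, with a smaller $n=10^5$ for speed) adds uniform random edges one by one and tracks component sizes with union-find. Endpoints are sampled independently, so occasional loops and repeated edges occur; this does not affect the asymptotics.

```python
import random

def largest_component(n, num_edges, rng):
    """Add num_edges uniform random edges on n vertices and return the
    size of the largest component, via union-find with path halving
    and union by size."""
    parent = list(range(n))
    size = [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    biggest = 1
    for _ in range(num_edges):
        # endpoints sampled independently; loops and repeats are
        # rare and asymptotically negligible
        u, v = find(rng.randrange(n)), find(rng.randrange(n))
        if u != v:
            if size[u] < size[v]:
                u, v = v, u
            parent[v] = u
            size[u] += size[v]
            biggest = max(biggest, size[u])
    return biggest

n = 100_000
print(largest_component(n, int(0.4 * n), random.Random(0)))  # subcritical: small
print(largest_component(n, int(0.6 * n), random.Random(1)))  # supercritical: giant
```

With $0.4n$ edges the largest component stays logarithmic in size; with $0.6n$ edges it grabs a positive fraction of all vertices.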
Let $G(n,p)$ denote the probability space over
graphs on $n$ vertices where each pair is adjacent with independent
probability $p$. (We may make a ``thought experiment'' in which
each pair of vertices flips a coin to decide if the pair is adjacent.)
Such graphs have very close to a proportion $p$
of the edges. The behaviors of $G[n,e]$ and $G(n,p)$, where
$e= p{n\choose 2}$, are asymptotically the same for all the
topics we discuss here. While Erd\H{o}s and R\'enyi analyzed
$G[n,e]$, most modern work gives results in the $G(n,p)$ format
and we shall describe results in that format.
We shall parameterize $p=\frac{c}{n}$. The graph with $\frac{n}{2}$
edges then corresponds to $c=1$.
Let $C_1,C_2$ denote the largest
and second largest components in the graph with $|C_i|$ denoting
their number of vertices.
We define the {\em complexity}
of a component with $V$ vertices and $E$ edges as $E-V+1$.
Trees and unicyclic graphs have complexity $0$ and $1$ respectively
and are called {\em simple}.
\begin{theorem}[Erd\H{o}s-R\'enyi]\label{2jumpthm}
The behavior of $G(n,p)$ with
$p=\frac{c}{n}$ can be broken into three regions.
\\ {\tt Subcritical} $c<1$: All components are simple and very small,
$|C_1|=O(\ln n)$.
\\ {\tt Critical} $c=1$: $|C_1|=\Theta(n^{2/3})$. A delicate situation!
\\ {\tt Supercritical} $c>1$: $|C_1|\sim yn$ where $y=y(c)$ is the
positive solution to the equation
\beq\label{y(c)}
e^{-cy}=1-y
\eeq
$C_1$ has high complexity.
All other components are simple and very small, $|C_2|=O(\ln n)$.
\end{theorem}
\noi {\tt Remark:} The notation above
allows the statements to be a little more brief than they would
be otherwise. For example,
$|C_1|\sim yn$ means that for
all $\eps > 0$ the limit as $n\ra\infty$ of the probability
that $(y-\eps)n < |C_1| < (y+\eps)n$ is one.
\noi {\tt Remark:} The average degree of a vertex is $p(n-1)\sim c$.
The critical behavior takes
place when the average degree reaches one.
\noi {\tt Remark:} Elementary, albeit challenging, calculus gives that
(\ref{y(c)}) has a unique positive solution $y=y(c)$ when $c>1$. Further,
parametrizing $c=1+\eps$ and letting $\eps$ approach zero from above:
\beq\label{y(c)limit}
y(1+\eps)\sim 2\eps
\eeq
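Although (\ref{y(c)}) has no closed form, the rearranged fixed-point iteration $y \leftarrow 1-e^{-cy}$ converges to the positive root from any start in $(0,1]$ when $c>1$ (the derivative of the map at the root is $c(1-y)<1$ there). A minimal numerical sketch:

```python
import math

def y_of_c(c, iters=500):
    """Positive solution y of exp(-c*y) = 1 - y for c > 1, found by
    iterating the rearranged equation y = 1 - exp(-c*y)."""
    y = 0.5  # any starting point in (0, 1] converges for c > 1
    for _ in range(iters):
        y = 1 - math.exp(-c * y)
    return y

y2 = y_of_c(2.0)
# residual check: exp(-c*y) should equal 1 - y at the root
print(y2, abs(math.exp(-2.0 * y2) - (1 - y2)))
# near c = 1 the root behaves like y(1 + eps) ~ 2*eps
print(y_of_c(1.05))
```

For $c=2$ this gives $y\approx 0.7968$, so the giant component of $G(n,2/n)$ covers about $80\%$ of the vertices.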
When $c>1$, Erd\H{o}s and R\'enyi called $C_1$ the {\em giant
component}. There are two salient features of the giant
component: its existence and its uniqueness. Their system
does not have a Jupiter and an also-huge Saturn; it is
more like Jupiter and Ceres. Several books \cite{AS},\cite{Bela},
\cite{JLR} give very full discussions.
\section{Francis Galton and Henry Watson}
In 1873 Francis Galton initiated the modern theory of branching
processes, posing the following problem in the Educational Times:
\begin{quote}
A large nation, of whom we will only concern ourselves with
adult males, $N$ in number, and who each bear separate surnames
colonise a district. Their law of population is such that, in
each generation, $a_0$ percent of the adult males have no male
children who reach adult life; $a_1$ have one such male child;
$a_2$ have two; and so on up to $a_5$ who have five. Find (1) what
proportion of their surnames will have become extinct after $r$
generations; and (2) how many instances there will be of the
surname being held by $m$ persons.
\end{quote}
Peter Jagers comments \cite{jagers}: \begin{quote} Rarely does a mathematical
problem convey so much of the flavour of its time,
colonialism and male supremacy hand in hand, as well as the
underlying concern for a diminished fertility of noble families,
paving the way for the crowds from the genetically dubious
lower classes. \end{quote}
The challenge was taken up by Henry Watson. Watson was a clergyman
who, not unlike his more illustrious contemporary Charles Dodgson
\footnote{Better known by his pen name Lewis Carroll}, had
keen mathematical abilities. He considered a process beginning
with a single node which we will, for balance, call Eve.
Eve has $k$ children with probability $a_k$.
These children then have children
independently with the same distribution and the process
continues through the generations. These processes are now
called Galton-Watson processes.
We shall here assume that
the number of Eve's children is Poisson with mean $c$.
That is,
Eve has $k$ children with probability $e^{-c}c^k/k!$.
Let $T_c$ be the total
population generated. There is a precise formula
\beq \label{precise} \Pr[T_c=k] = \frac{e^{-ck}(ck)^{k-1}}{k!} \eeq
However, it is also possible that $T_c$ is infinite.
\begin{theorem}\label{G-W} The Galton-Watson process has three regions.
\\ {\tt Subcritical} $c<1$: $T_c$ is finite with probability one
and $E[T_c]=\sum_{k=0}^{\infty}c^k = \frac{1}{1-c}$.
\\ {\tt Critical} $c=1$: $T_c$ is finite with probability one
but has infinite expectation. \footnote{This author recalls
in undergraduate days first seeing a finite random variable
with infinite expectation and thinking it was a very funny
and totally anomalous creation. Wrong! Such variables
occur frequently at critical points in percolation processes.}
\\ {\tt Supercritical} $c>1$: $T_c$ is infinite with probability $y=y(c)$
given by (\ref{y(c)}).
\end{theorem}
With $c>1$ let $z=1-y$ be the probability Eve generates a finite
tree. If Eve has $k$ children the full tree will be finite if and
only if all of the children generate finite trees, which has
probability $z^k$. Thus
\beq\label{zeq} z = \sum_{k=0}^{\infty} e^{-c}\frac{c^k}{k!}z^k \eeq
and some manipulation gives that $y=1-z$ satisfies (\ref{y(c)}).
Galton and Watson \cite{GW} noted that $z=1,y=0$ is a solution to
(\ref{zeq}) and deduced, incorrectly, that the Galton-Watson
process dies with probability one. But when $c>1$ there is
another solution $z<1$ and that, we now know, gives the
correct probability.
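The survival probability in Theorem \ref{G-W} is easy to check empirically. The sketch below (hypothetical helper names, illustrative parameters) runs the Poisson($c$) branching process, declaring survival once the total population passes a cap, and compares the observed survival frequency with $y(2)\approx 0.797$ from (\ref{y(c)}).

```python
import math, random

def poisson(mean, rng):
    """Sample Poisson(mean) by Knuth's product method (fine for small mean)."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def gw_survives(c, rng, cap=2_000):
    """Run one Poisson(c) Galton-Watson process; a total population
    above cap is declared survival (a finite proxy for infinity)."""
    alive, total = 1, 1
    while alive > 0 and total <= cap:
        children = poisson(c, rng)  # one individual reproduces, then retires
        alive += children - 1
        total += children
    return alive > 0

rng = random.Random(0)
trials = 500
frac = sum(gw_survives(2.0, rng) for _ in range(trials)) / trials
print(frac)  # compare with y(2), about 0.797
```

The cap is harmless: conditioned on extinction the process behaves subcritically, so finite trees essentially never approach the cap.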
\subsection{Erd\H{o}s meets Galton}
Fix a vertex $v$ in $G(n,p)$ with $p=\frac{c}{n}$. The component of $v$
may be found by what is known as a Breadth First Search (BFS) algorithm.
One first finds the neighbors of $v$. Then for each such neighbor $w$
we find its neighbors. Then for each such second generation $u$ we find
its neighbors and so on. What occurs on this random graph?
The number of neighbors of $v$ has
Binomial Distribution $B[n-1,p]$, which is asymptotically Poisson with
mean $c$. The neighbors $w$ of $v$ then have Poisson $c$ new neighbors, and so on.
The component of $v$ is approximated by the Galton-Watson process. Sometimes.
For $c<1$ this approximation works well. But for $c>1$ the Galton-Watson
process may go on forever while the component of $v$ can have at most
$n$ vertices. What goes wrong with the approximation?
BFS requires {\em new} vertices. After $\delta n$ vertices have
been found the new distribution is Binomial $B[n(1-\delta),p]$, which
is asymptotically Poisson with mean $c(1-\delta)$. The success of BFS causes $\delta$
to rise, which makes it harder to find new vertices, leading the process
to eventually die. This effect has the colorful name {\em ecological limitation}.
The effect of the ecological limitation is only
felt after a positive proportion $\delta n$ of vertices have been
found. Consider BFS from each vertex $v$. With probability $1-y$
the process will die early, giving a small component. But for
$\sim yn$ vertices the process will not die early. All of these vertices
have their components merge into the giant component.
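This heuristic can be tested numerically. In the sketch below (illustrative parameters) the BFS queue is driven by Poisson counts with the shrinking mean $c\,u/n$, where $u$ counts unvisited vertices, exactly the ecological limitation; the fraction of starts that do not die early approaches $y(1.5)\approx 0.583$.

```python
import math, random

def poisson(mean, rng):
    """Sample Poisson(mean) by Knuth's product method."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def explored_size(n, c, rng):
    """Size of the component found by BFS from one vertex of G(n, c/n),
    approximating each expansion by Poisson(c * u / n) new vertices,
    where u is the number of still-unvisited vertices."""
    unvisited, queue, found = n - 1, 1, 1
    while queue > 0:
        k = min(poisson(c * unvisited / n, rng), unvisited)
        unvisited -= k
        found += k
        queue += k - 1
    return found

rng = random.Random(0)
n, c, trials = 2_000, 1.5, 1_000
big = sum(explored_size(n, c, rng) > n // 10 for _ in range(trials)) / trials
print(big)  # compare with y(1.5), about 0.583
```

Components either die at size $O(\ln n)$ or grow to $\sim yn$, so any threshold between the two scales separates the cases cleanly.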
\subsection{Jupiter without Saturn}
Why can we not have Jupiter and Saturn, two components both of
size bigger than $\delta n$? This would be
highly unstable. Each additional edge would have probability
at least $(\delta n)^2/{n\choose 2}\sim 2\delta^2$ of merging them.
High instability and nonexistence are not the same. Indeed, while
there are many proofs of the uniqueness of the giant component we
do not know one that is both simple and rigorous.
\section{The Critical Window}
Erd\H{o}s and R\'enyi normally repressed their enthusiasm in
their formal writings. But not now!
\begin{quote}
This double ``jump'' in the size of the largest component
when $\frac{e}{n}$ passes the value $\frac{1}{2}$ is one of
the most striking facts concerning random graphs. \cite{ER}
\end{quote}
In the 1980s, spearheaded by the work of B\'ela
Bollob\'as \cite{BB} and Tomasz {\L}uczak \cite{TL},
the value $c=1$ was stretched out and a Critical Window
was found. The stretching was done by adding a second order term.
The correct parameterization is
\beq \label{window} p = \frac{1}{n} + \frac{\lam}{n^{4/3}} \eeq
Now there are three regions:
\\ {\tt Barely Subcritical} $pn\sim 1$ and
$\lam$ is a function of
$n$ which approaches $-\infty$:
All components are simple. $|C_1|\sim |C_2|$, their sizes
increasing with $\lam$.
\\ {\tt The Critical Window} $pn\sim 1$ and $\lam$ is a constant:
Here (and only here!) we have chaotic behavior, distributions
instead of almost sure behavior. Parametrizing $|C_i|=X_in^{2/3}$
the $X_i$ ($i=1,2$ and beyond) have a nontrivial distribution
and a nontrivial joint distribution. The complexity $Y_i$ of
$C_i$ also has a nontrivial distribution.
\\ {\tt Barely Supercritical} $pn\sim 1$ and
$\lam$ is a function of
$n$ which approaches $+\infty$:
$|C_1|\gg n^{2/3} \gg |C_2|$. $C_1$ is the {\em dominant
component}, much bigger than $C_2$ but still small. $C_1$
has high complexity but all other components are simple.
Stirling's Formula applied to (\ref{precise}) with $c=1$ gives
$\Pr[T_1=k] \sim (2\pi)^{-1/2}k^{-3/2}$ and $\Pr[T_1\geq k]
\sim 2(2\pi)^{-1/2}k^{-1/2}$.
Now consider $G(n,p)$ with
$pn=1$ and let $C(v)$ be the component containing $v$. Call $C(v)$
large if its size is at least $Kn^{2/3}$ and let $Z$ be the number
of $v$ with $C(v)$ large.
Estimating
$|C(v)|$ by $T_1$ would give that $C(v)$ is large with probability
$2(2\pi)^{-1/2}K^{-1/2}n^{-1/3}$ and so $Z$ would have expectation
$2(2\pi)^{-1/2}K^{-1/2}n^{2/3}$. The actual value of $E[Z]$ is somewhat
smaller due to the ecological limitation, but let us accept it as a heuristic.
If any large component exists every vertex of it would be in
a large component so that $Z$ would be at least $Kn^{2/3}$.
When $K$ is large $E[Z]$ is much lower than $Kn^{2/3}$ so
that with high probability there would be no large component.
Conversely, when $K$ is a small positive constant we expect
many components of size bigger than $Kn^{2/3}$.
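The Stirling estimate can be confirmed directly from (\ref{precise}). A quick numerical sketch, computing the probability in log space to avoid overflow:

```python
import math

def borel_pmf(k, c=1.0):
    """Pr[T_c = k] = e^{-ck} (ck)^{k-1} / k!, computed via logarithms."""
    return math.exp(-c * k + (k - 1) * math.log(c * k) - math.lgamma(k + 1))

# at c = 1 the pmf is asymptotic to (2*pi)^{-1/2} * k^{-3/2}
k = 1_000
asymptote = (2 * math.pi) ** -0.5 * k ** -1.5
print(borel_pmf(k) / asymptote)  # approaches 1 as k grows

# for c < 1 the total population is finite a.s., so the pmf sums to 1
print(sum(borel_pmf(k, 0.5) for k in range(1, 300)))
```

The ratio at $k=1000$ is already within $10^{-3}$ of one, since the relative error in Stirling's Formula is of order $1/(12k)$.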
\subsection{A Strange Physics}
To understand evolution inside the Critical Window we
set $p=n^{-1}+ \lam n^{-4/3}$ and consider $\lam$ (ranging
over all real numbers) as a parametrized time.
Let $c_in^{2/3}$
be the size of the $i$-th largest component at
$p=n^{-1}+ \lam n^{-4/3}$.
Let $\Delta\lam$ be an ``infinitesimal" and increase time
$\lam$ by $\Delta\lam$. There are $(c_in^{2/3})(c_jn^{2/3})$ potential
edges that would merge $C_i,C_j$ and each is added with probability
$(\Delta\lam)n^{-4/3}$. The $n$ factors cancel: $C_i, C_j$ merge
to form a component of size $(c_i+c_j)n^{2/3}$ with probability
$c_ic_j(\Delta\lam)$. This gravitational attraction merges the
large components and forms the dominant component. We can include
the complexity in this model. When $C_i,C_j$ with complexities
$r_i,r_j$ merge, the new component has complexity $r_i+r_j$.
Further, each $C_i$ has $\sim \frac{1}{2}c_i^2n^{4/3}$ potential
internal edges. In the infinitesimal time $\Delta\lam$ with
probability $c_i^2(\Delta\lam)/2$ such an edge is added and the
complexity of $C_i$ is incremented by one. Over time, the
complexities get larger and larger. The limiting process, called
the multiplicative coalescent process, has interesting connections
to Brownian motion. \cite{Aldous}
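A toy discretization of these dynamics (hypothetical step size and starting configuration) shows the gravitational merging; total rescaled mass is conserved while the blocks coalesce into a dominant one.

```python
import random

def coalescent(sizes, lam_end, dlam, rng):
    """Crude Euler scheme for the merging dynamics: in each time slice
    of width dlam, components of rescaled sizes ci, cj merge with
    probability ci * cj * dlam (internal edges are ignored here)."""
    sizes = list(sizes)
    for _ in range(int(lam_end / dlam)):
        merged, used = [], [False] * len(sizes)
        for i in range(len(sizes)):
            if used[i]:
                continue
            cur = sizes[i]
            for j in range(i + 1, len(sizes)):
                if not used[j] and rng.random() < cur * sizes[j] * dlam:
                    cur += sizes[j]  # merge j into the growing block
                    used[j] = True
            merged.append(cur)
        sizes = merged
    return sizes

rng = random.Random(0)
start = [0.1] * 50
end = coalescent(start, lam_end=4.0, dlam=0.01, rng=rng)
print(len(end), max(end), sum(end))
```

Bigger blocks merge at quadratically higher rates, which is exactly the mechanism that produces a single dominant component.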
\subsection{A Computer Exercise}
\begin{center}
\begin{tabular}{rrrrrrrrrrrrrrrrrr}
$\cdot$ & -4 & N & -3 & N& -2 & N & -1 & N & 0 & N & +1 & N & +2 & N & +3 & N & +4 \\
\hline \hline
0& 0.14&1 & 0.18 &0& 0.24& 1& 0.28 &0 & 0.37 &0 & 0.82 &0 & 1.16 &0 & 4.21 &0& 5.88\\
1& 0.10&2& 0.16&1& 0.19&2& 0.26&1& 0.36&1& 0.39&1& 0.86&0& 0.22&0& 0.24 \\
2& 0.10&3& 0.13&3& 0.13&4& 0.19&3& 0.26&0& 0.28&3& 0.49&0& 0.13&0& 0.12 \\
3& 0.09&4& 0.12&4& 0.13&0& 0.16&2& 0.21&2& 0.23&4& 0.46&0& 0.12&0& 0.10 \\
4& 0.07&0& 0.09&5& 0.13&3& 0.14&5& 0.16&0& 0.20&2& 0.32&0& 0.11&2& 0.10 \\
5& 0.07&0& 0.08&7& 0.09&5& 0.12&4& 0.15&4& 0.14&5& 0.16&1& 0.10&1& 0.10 \\
6& 0.07&5& 0.06&6& 0.09&7& 0.10&8& 0.12&5& 0.12&2& 0.12&3& 0.09&1& 0.10 \\
7& 0.06&8& 0.06&2& 0.08&6& 0.09&-& 0.12&3& 0.10&1& 0.11&4& 0.09&4& 0.10 \\
8& 0.05&-& 0.06&8& 0.06&9& 0.09&-& 0.10&3& 0.10&7& 0.09&2& 0.09&0& 0.08 \\
9& 0.05&9& 0.05&-& 0.06&0& 0.07&-& 0.10&9& 0.10&0& 0.08&5& 0.08&6& 0.07
\end{tabular}
\end{center}
Computer experimentation vividly shows the rapid development
in the critical window. In the above run \footnote{Thanks to
Juliana Freire} we begin with $n=10^5$ vertices and no edges.
At each step a random edge is added and a Union-Find algorithm
is used to keep track of component sizes.
We parameterize the number of edges as
$e= {n\choose 2}(n^{-1}+\lam n^{-4/3})$ and take ``snapshots''
at $\lam=-4,-3,\ldots,+4$. The ten largest component sizes
(listed $0,\ldots, 9$ here, and divided by $n^{2/3}$) are
given for each $\lam$.
At $\lam = +2$ there is
a $1.16$ Jupiter and a $0.86$ Saturn.
The next digit, under $N$, gives the new ranking ($-$ if not in
the top ten) for that component for the next $\lam$. Components
$0,1,2,3,4$ have $N=0$ meaning they have all merged by $\lam = +3$.
At $\lam=+3$ Jupiter has blown up to $4.21$. (Smaller components have
also joined Jupiter, explaining the discrepancy in the sum.) The
size of the
second largest component has {\em decreased} (it is the component
formerly ranked $5$) to a $0.22$ Ceres.
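The experiment is straightforward to rerun. This sketch (hypothetical code, $n=10^5$) adds random edges with union-find and records $|C_1|/n^{2/3}$ at $\lam=-4$ and $\lam=+4$; the jump from an $O(1)$ jostle to a dominant component is already visible at this scale.

```python
import random

def run_window(n, lams, rng):
    """Add uniform random edges one at a time; at each lam in lams
    (ascending), once e = C(n,2) * (1/n + lam * n^{-4/3}) edges have
    been attempted, record the largest component over n^{2/3}.
    Endpoint sampling may repeat pairs; negligible at this scale."""
    parent, size = list(range(n)), [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    targets = [int(n * (n - 1) / 2 * (1 / n + lam * n ** (-4 / 3)))
               for lam in lams]
    snaps, biggest, e = [], 1, 0
    for target in targets:
        while e < target:
            u, v = find(rng.randrange(n)), find(rng.randrange(n))
            if u != v:
                if size[u] < size[v]:
                    u, v = v, u
                parent[v] = u
                size[u] += size[v]
                biggest = max(biggest, size[u])
            e += 1
        snaps.append(biggest / n ** (2 / 3))
    return snaps

x_sub, x_sup = run_window(100_000, [-4, 4], random.Random(0))
print(x_sub, x_sup)
```

At $\lam=-4$ the rescaled largest component is a small constant; at $\lam=+4$ it is near the asymptotic value $2\lam=8$, attenuated by finite-size effects.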
\subsection{Inside the Critical Window} At $\lam=-4$ there
is a ``jostling for position'' among the top components, while by
$\lam = +4$ a dominant component has emerged.
The last time the largest component loses
that distinction occurs \cite{TL} during the critical window.
At $\lam=-4$
all components are simple while by $\lam = +4$ the dominant component
has high complexity. Complexity at least $4$ is necessary for
nonplanarity.
Planarity is lost \cite{LW}
in the critical window.
In a masterful work \cite{JKLP} the development
of complex components is studied. One exceptionally striking result:
the probability that the evolution ever simultaneously has two complex
components is, asymptotically, $\frac{5}{18}\pi$.
\subsection{Classical Bond Percolation}
Mathematical physicists examine $Z^d$ as a lattice; the pairs
$\vec{v},\vec{w}$ that are distance one apart are called {\em bonds}.
They imagine that each bond is {\em occupied} with independent
probability $p$. (For graph theorists, the occupied bonds form a
random subgraph of $Z^d$.) The occupied bonds then form
clusters, or components.
There is a critical probability, denoted
$p_c$ (dependent on $d$), so that:
\\ {\tt Subcritical} $p<p_c$: All components are finite.
\\ {\tt Supercritical} $p>p_c$: There is precisely one infinite
component.
\par There are
natural analogies between this infinite model and the asymptotic
Erd\H{o}s-R\'enyi model. Infinite size corresponds to $\Omega(n)$
while finite size corresponds to $O(\ln n)$. There is particular
interest in
$p$ being very close to $p_c$.
Let $f(p)$ be the probability that $\vec{0}$
(or, by symmetry, any particular $\vec{v}$) lies in an infinite
component. The {\em critical exponent} $\beta$ is that
\footnote{$\beta$ might not exist, but all mathematical physicists
assume it does.}
real number such that $f(p_c+x)\sim x^{\beta+o(1)}$ as $x\ra 0^+$.
$p_c+x$ corresponds to $pn=1+x$ and $f$ to the probability that
a given vertex $v$ lies in the giant component or, equivalently,
the proportion $y(1+x)$ of vertices in the giant component. As, by (\ref{y(c)limit}),
$y(1+x)\sim 2x = x^{1+o(1)}$, the $\beta$ value for the Erd\H{o}s-R\'enyi
model is considered $1$. This is known to be the $\beta$ value in
$Z^d$ for all $d\geq 19$. Grimmett \cite{Gri}
gives many other critical exponents and in all cases the analogous
value for the Erd\H{o}s-R\'enyi model matches the known value in
high dimensional space.
Mathematical physicists loosely use the term {\em mean field
behavior} to describe percolation phenomena in high dimensions
and the Erd\H{o}s-R\'enyi model has this mean field behavior.
\section{Recent Results}
Today it is recognized that percolation and the critical window
appear in many guises. Here is a highly subjective description
of recent work. We generally give simplified versions.
\subsection{Random 2-SAT} We generate $m$ random clauses $C_1,\ldots,C_m$
on Boolean variables $x_1,\ldots,x_n$. That is, each
clause has the form $C=y\vee z$ with $y,z$ drawn randomly from
$\{x_1,\overline{x_1},\ldots,x_n,\overline{x_n}\}$. We ask if all
$C_i$ can be simultaneously satisfied. The answer changes from
yes to no in the critical window $m= n + \lam n^{2/3}$. \cite{2SAT}
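Satisfiability of a 2-SAT instance is decidable in linear time via the implication graph: each clause $y\vee z$ yields the implications $\overline{y}\ra z$ and $\overline{z}\ra y$, and the instance is unsatisfiable exactly when some variable lies in the same strongly connected component as its negation. A sketch using Kosaraju's algorithm (the literal encoding here is a hypothetical choice):

```python
def two_sat(n, clauses):
    """Decide 2-SAT over variables 1..n. Literals: +i means x_i,
    -i means not x_i. Satisfiable iff no variable shares a strongly
    connected component of the implication graph with its negation."""
    N = 2 * n
    lit = lambda v: 2 * (abs(v) - 1) + (v < 0)  # x_i -> 2i, not x_i -> 2i+1
    adj = [[] for _ in range(N)]
    radj = [[] for _ in range(N)]
    for a, b in clauses:  # (a or b): not-a -> b and not-b -> a
        adj[lit(-a)].append(lit(b))
        adj[lit(-b)].append(lit(a))
        radj[lit(b)].append(lit(-a))
        radj[lit(a)].append(lit(-b))
    # pass 1: iterative DFS on adj, recording post-order
    seen, order = [False] * N, []
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            v, i = stack.pop()
            if i < len(adj[v]):
                stack.append((v, i + 1))
                w = adj[v][i]
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, 0))
            else:
                order.append(v)
    # pass 2: label components on the reverse graph, in reverse post-order
    comp, c = [-1] * N, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s], stack = c, [s]
        while stack:
            v = stack.pop()
            for w in radj[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1
    return all(comp[2 * i] != comp[2 * i + 1] for i in range(n))

print(two_sat(2, [(1, 2), (-1, 2), (1, -2)]))  # True: x1 = x2 = True works
print(two_sat(1, [(1, 1), (-1, -1)]))          # False: forces x1 and not x1
```

Generating the clauses uniformly at random and feeding them to this routine reproduces the yes-to-no transition near $m=n$.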
\subsection{$d$-regular graphs} Let $G_n$ be a sequence of transitive
$d$-regular graphs. Under reasonable conditions $p_c=\frac{1}{d-1}$
acts as the critical probability for a random subgraph of $G_n$.
For $p<p_c$ all components are small, while for $p>p_c$ there is a
giant component. More delicately, at $p=p_c$ the largest component has
size $\Theta(n^{2/3})$. The scaling $p(d-1) = 1 + \lam n^{-1/3}$ acts
as the critical window. \cite{Nachmias}
\subsection{An Improving Walk} Consider an infinite walk starting
at $W_0=1$ with $W_t=W_{t-1}+X_t-1$ where $X_t$ is Poisson with mean
$\frac{t}{n}$. When $W_t=0$ (a crash) it is reset to $W_t=1$.
When $t=(1-\eps)n$ the walk has negative drift and crashes
repeatedly.
When $t=(1+\eps)n$ the walk has positive drift and goes
to infinity.
The walk will crash for the last time in the critical window
$t = n + \lam n^{2/3}$. \cite{Jul}
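The walk takes only a few lines to simulate (illustrative $n$ and horizon). With a fixed seed the last crash indeed lands near $t=n$:

```python
import math, random

def poisson(mean, rng):
    """Sample Poisson(mean) by Knuth's product method."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def last_crash(n, horizon, rng):
    """Simulate W_t = W_{t-1} + X_t - 1 with X_t ~ Poisson(t/n),
    resetting to 1 on each crash; return the time of the last crash
    observed before the horizon."""
    w, last = 1, 0
    for t in range(1, horizon + 1):
        w += poisson(t / n, rng) - 1
        if w <= 0:
            w, last = 1, t
    return last

n = 10_000
t_star = last_crash(n, 2 * n, random.Random(0))
print(t_star / n)  # concentrated near 1, within the n^{2/3} window
```

Before $t=n$ the negative drift forces crashes constantly; well after $t=n$ the walk is large and growing, so a late crash is vanishingly unlikely.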
\subsection{A First Order Phase Transition} Modify the Erd\H{o}s-R\'enyi
evolution as follows. Each round an edge is added to $G$, initially
empty. {\em Two} random pairs $\{u,v\}$, $\{w,x\}$ are given.
Add that pair for which the product of the component sizes of the
two vertices is smaller. This provides a powerful anti-gravity that
deters large components from joining. Parameterizing $e=t\frac{n}{2}$
edges chosen (so that $t=1$ is the critical value in the Erd\H{o}s-R\'enyi
evolution) the giant component occurs at $t\sim 1.77$. More
interestingly, extensive computer simulation (but no mathematical proof!)
indicates strongly that when the critical value is reached there
is a first order phase transition. That is, let $t=t_c$ be the critical
value and let $f(t)$ be the proportion of vertices in the largest
component at ``time'' $t$. Then the limit of $f(t)$ as $t$ approaches
$t_c$ from above appears not to be zero, but rather something like $0.6$.
\cite{Science}
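The product rule is a one-line change to the union-find evolution. This sketch (hypothetical code, illustrative $n$) compares the largest component just below and just above the reported critical time $t\approx 1.77$:

```python
import random

def product_rule_largest(n, t, rng):
    """Product-rule evolution: each round two candidate pairs are
    offered and the one minimizing the product of its endpoints'
    component sizes is added; run t*n/2 rounds and return the
    largest component size (loops occasionally sampled, harmless)."""
    parent, size = list(range(n)), [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    biggest = 1
    for _ in range(int(t * n / 2)):
        a, b = find(rng.randrange(n)), find(rng.randrange(n))
        c, d = find(rng.randrange(n)), find(rng.randrange(n))
        # keep the candidate pair with the smaller size product
        if size[a] * size[b] <= size[c] * size[d]:
            u, v = a, b
        else:
            u, v = c, d
        if u != v:
            if size[u] < size[v]:
                u, v = v, u
            parent[v] = u
            size[u] += size[v]
            biggest = max(biggest, size[u])
    return biggest

n = 100_000
print(product_rule_largest(n, 1.5, random.Random(0)) / n)  # below t_c
print(product_rule_largest(n, 2.0, random.Random(1)) / n)  # above t_c
```

Below the critical time the anti-gravity keeps all components small; just above it a macroscopic component appears abruptly.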
\subsection{General Critical Points} Let $G$ be a graph on $n$ vertices.
Let $d_v$ denote the degree of vertex $v$ and
set $d^* = (\sum_v d_v^2)/(\sum_v d_v)$.
Note that $d^*$ gives the
average degree of a vertex if one first selects an edge uniformly and
then one of its vertices uniformly. Let $G_p$ denote the random
subgraph of $G$, accepting each edge with independent probability $p$.
Then, under certain mild conditions on $G$, $p= \frac{1}{d^*}$ is
the critical point in the evolution of $G_p$. When
$p = \frac{1-\eps}{d^*}$, $G_p$ contains no giant component while
when $p=\frac{1+\eps}{d^*}$, $G_p$ does contain a unique giant component.
\cite{Chung}
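The quantity $d^*$ is a size-biased average degree and is immediate to compute. A small sketch (hypothetical helper name) checks two extremes: a $2$-regular graph, and a star $K_{1,n-1}$, where the high-degree hub pulls $d^*$ up to $n/2$ even though the ordinary average degree is below $2$.

```python
def d_star(degrees):
    """Size-biased average degree: select an edge uniformly, then one
    of its endpoints uniformly, and report that endpoint's expected
    degree; this equals sum(d^2) / sum(d)."""
    return sum(d * d for d in degrees) / sum(degrees)

# a 2-regular graph: every vertex has degree 2, so d* = 2
print(d_star([2] * 100))
# a star K_{1,99}: one hub of degree 99 and 99 leaves; d* = 50
print(d_star([99] + [1] * 99))
```

The star shows why $d^*$, and not the ordinary average degree, governs the branching in BFS: a random edge is far more likely to lead to a high-degree vertex.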
\subsection{Degree Sequence}
For given $d_1,\ldots,d_n$ we consider the graph
on $n$ vertices chosen uniformly
amongst all with that degree sequence, that is,
$v$ having precisely $d_v$ neighbors. Suppose for each $n$
we have a
degree sequence with $\lam_i(n)\sim \lam_in$ vertices having
degree $i$. Then (with $d^*$ from above),
$d^*\sim [\sum i^2\lam_i]/[\sum i\lam_i]$.
Set $Q:= \sum_i i(i-2)\lam_i$ so that $Q >0$ if and only if
$d^* > 2$. In analyzing BFS a new edge will have a new vertex
with expected degree $d^*$ and an expected $d^*-1$ edges to
new vertices. With $d^*<2$ the process will die while
for $d^*>2$ it might continue. When $Q<0$ the
random graph with this degree sequence has no giant component
while when $Q > 0$ it does. \cite{MR}
\par
Set $Q_n:= \sum_i i(i-2)\lam_i(n)$ and assume $Q_n\ra 0$.
Under moderate assumptions, $Q_n=\lam n^{-1/3}$ provides
a critical window.
For $\lam \ra -\infty$ the
random graph is
subcritical and all components have size $o(n^{2/3})$.
When $\lam \ra +\infty$ the random graph is
supercritical: there is a dominant component of size
$\gg n^{2/3}$ and all other components have size $o(n^{2/3})$.
In the power law random graphs, thought by many to model the
web graph and other phenomena, it is assumed that $\lam_i
\sim i^{-\gam}$ for a constant $\gam$. For certain $\gam$
the above critical window does not work, and work in progress
indicates that there is a critical window whose exponent
depends on $\gam$.
\cite{KS} \cite{JansonLuczak}
\subsection{A Potts Model}
In the Potts Model, the distribution of graphs is biased toward
having more components. There are three parameters, $p\in [0,1]$,
$q\geq 1$, and the number of vertices $n$. A graph $G$ with
$e$ edges, $s:={n\choose 2}-e$ nonedges, and $c$ components
has probability $p^e(1-p)^sq^c/Z$ where $Z$ is a normalizing
constant chosen so that the sum of the probabilities is one.
For $q=2$ this is called the Ising model; for $q\geq 3$ and
integral, this is the Potts model. For $2