Abstract
Exact inference in densely connected Bayesian networks is computationally intractable, and so there is considerable interest in developing effective approximation schemes. One approach which has been adopted is to bound the log likelihood using a mean field approximating distribution. While this leads to a tractable algorithm, the mean field distribution is assumed to be factorial and hence unimodal. In this paper we demonstrate the feasibility of using a richer class of approximating distributions based on mixtures of mean field distributions. We derive an efficient algorithm for updating the mixture parameters and apply it to the problem of learning in sigmoid belief networks. Our results demonstrate a systematic improvement over simple mean field theory as the number of mixture components is increased.
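
For orientation, the quantities involved can be sketched in standard variational notation. The symbols below (visible units $V$, hidden units $H$, approximating distribution $Q$, mixing coefficients $\pi_m$) are a conventional choice for illustration, not a transcription of the paper's own notation.

```latex
% Variational lower bound: for any distribution Q(H) over the hidden
% units, the log likelihood of the visible units V is bounded below by
\ln p(V) \;\ge\; \sum_{H} Q(H) \,\ln \frac{p(H, V)}{Q(H)}
  \;\equiv\; \mathcal{F}[Q]

% Standard mean field: Q is restricted to a factorial form,
% which is necessarily unimodal
Q(H) \;=\; \prod_{i} Q_i(h_i)

% Mixture extension: Q is instead a mixture of M factorial
% distributions, and the mixing coefficients \pi_m are optimized
% together with the component parameters to tighten the bound
Q(H) \;=\; \sum_{m=1}^{M} \pi_m\, Q_m(H),
  \qquad \pi_m \ge 0, \quad \sum_{m=1}^{M} \pi_m = 1
```

Because each component $Q_m$ can place its mass on a different mode, increasing $M$ lets the mixture capture multimodal posteriors that a single factorial distribution cannot, which is the mechanism behind the systematic improvement reported above.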
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 1997 Conference on Advances in Neural Information Processing Systems 10 |
| Publisher | MIT Press |
| Pages | 416-422 |
| Number of pages | 7 |
| ISBN (Print) | 0-262-10076-2 |
| Publication status | Published - 1998 |