There are several approximate inference methods for BNs in aGrUM (pyAgrum). They share the same API as exact inference.
Loopy Belief Propagation : LBP is an approximate inference method that applies the exact message-passing scheme (exact when the BN is a tree) even when the BN is not a tree. LBP is a special case among approximate inference algorithms : it may not converge, and even when it converges, it may converge to something other than the exact posterior. However, LBP is fast and usually gives reasonably good results.
Sampling inference : sampling inference estimates the posterior from random samples. Sampling may be (very) slow, but these algorithms converge to the exact distribution. aGrUM implements :
Monte Carlo sampling,
Weighted sampling,
Importance sampling,
Gibbs sampling.
Finally, aGrUM proposes the so-called 'loopy versions' of the sampling algorithms : the idea is to use LBP to build a Dirichlet prior for the sampling algorithm. A loopy version of each sampling algorithm is provided.
In [1]:
import os

%matplotlib inline
from pylab import *
import matplotlib.pyplot as plt

def unsharpen(bn):
    """
    Force the parameters of the BN to be a bit further from 0 and 1 (softening the CPTs)
    """
    for nod in bn.nodes():
        bn.cpt(nod).translate(bn.maxParam() / 10).normalizeAsCPT()

def compareInference(ie, ie2, ax=None):
    """
    compare 2 inferences by plotting all the points (posterior(ie), posterior(ie2))
    """
    exact = []
    appro = []
    errmax = 0
    for node in bn.nodes():  # note: uses the global bn
        # potentials as lists
        exact += ie.posterior(node).tolist()
        appro += ie2.posterior(node).tolist()
        errmax = max(errmax,
                     (ie.posterior(node) - ie2.posterior(node)).abs().max())
    if errmax < 1e-10:
        errmax = 0
    if ax is None:
        fig = plt.figure(figsize=(4, 4))
        ax = plt.gca()  # default axis for plt
    ax.plot(exact, appro, 'ro')
    ax.set_title("{} vs {}\n{}\nMax error {:2.4} in {:2.4} seconds".format(
        str(type(ie)).split(".")[2].split("_")[0][0:-2],   # name of the first inference
        str(type(ie2)).split(".")[2].split("_")[0][0:-2],  # name of the second inference
        ie2.messageApproximationScheme(),
        errmax,
        ie2.currentTime()))
ie4 = gum.ImportanceSampling(bn)
ie4.setEpsilon(10**-1.8)
ie4.setMaxTime(10)  # 10 seconds for inference
ie4.setPeriodSize(300)
ie4.makeInference()
compareInference(ie, ie4)
Every sampling inference has a 'hybrid' (loopy) version, which first runs a loopy belief propagation and uses its result as a prior for the probability estimations by sampling.
In [12]:
ie3 = gum.LoopyGibbsSampling(bn)
ie3.setEpsilon(10**-1.8)
ie3.setMaxTime(10)  # 10 seconds for inference
ie3.setPeriodSize(300)
ie3.makeInference()
compareInference(ie, ie3)
def compareAllInference(bn, evs={}, epsilon=10**-1.6, epsilonRate=1e-8, maxTime=20):
    ies = [gum.LazyPropagation(bn),
           gum.LoopyBeliefPropagation(bn),
           gum.GibbsSampling(bn),
           gum.LoopyGibbsSampling(bn),
           gum.WeightedSampling(bn),
           gum.LoopyWeightedSampling(bn),
           gum.ImportanceSampling(bn),
           gum.LoopyImportanceSampling(bn)]

    # burn-in for the Gibbs samplers
    for i in [2, 3]:
        ies[i].setBurnIn(300)
        ies[i].setDrawnAtRandom(True)

    # common parameters for the sampling engines
    for i in range(2, len(ies)):
        ies[i].setEpsilon(epsilon)
        ies[i].setMinEpsilonRate(epsilonRate)
        ies[i].setPeriodSize(300)
        ies[i].setMaxTime(maxTime)

    for i in range(len(ies)):
        ies[i].setEvidence(evs)
        ies[i].makeInference()

    fig, axes = plt.subplots(1, len(ies) - 1, figsize=(35, 3), num='gpplot')
    for i in range(len(ies) - 1):
        compareInference(ies[0], ies[i + 1], axes[i])