Relevance Reasoning with pyAgrum¶
Relevance reasoning is the analysis of the influence of evidence on a Bayesian network.
In this notebook, we explain what relevance reasoning is and how to perform it with pyAgrum.
import pyAgrum as gum
import pyAgrum.lib.notebook as gnb
import time
import os
%matplotlib inline
from pylab import *
import matplotlib.pyplot as plt
Multiple inferences¶
In the well-known 'alarm' BN, how can we analyze the influence on 'VENTALV' of a soft evidence on 'MINVOLSET'?
bn=gum.loadBN("res/alarm.dsl")
gnb.showBN(bn,size="6")
We propose to plot the posterior of 'VENTALV' for the evidence $$\forall x \in [0,1],\; e_{MINVOLSET}=[0,x,0.5].$$
To do so, we perform a large number of inferences and plot the posteriors.
K=1000
r=range(0,K)
xs=[x/K for x in r]
def getPlot(xs,ys,K,duration):
    p=plot(xs,ys)
    legend(p,[bn['VENTALV'].label(i) for i in range(bn['VENTALV'].domainSize())],loc=7);
    title('VENTALV ({} inferences in {:5.3} s)'.format(K,duration));
    ylabel('posterior Probability');
    xlabel('Evidence on MINVOLSET : [0,x,0.5]');
First try: classical lazy propagation¶
tf=time.time()
ys=[]
for x in r:
    ie=gum.LazyPropagation(bn)
    ie.setNumberOfThreads(1) # to be fair, we avoid multithreaded inference
    ie.addEvidence('MINVOLSET',[0,x/K,0.5])
    ie.makeInference()
    ys.append(ie.posterior('VENTALV').tolist())
delta1=time.time()-tf
getPlot(xs,ys,K,delta1)
The title of the figure above gives the total time for those 1000 inferences.
Second try: classical variable elimination¶
One can note that we just need one posterior. This is a case where VariableElimination
should give better results.
tf=time.time()
ys=[]
for x in r:
    ie=gum.VariableElimination(bn)
    ie.addEvidence('MINVOLSET',[0,x/K,0.5])
    ie.makeInference()
    ys.append(ie.posterior('VENTALV').tolist())
delta2=time.time()-tf
getPlot(xs,ys,K,delta2)
pyAgrum provides the function gum.getPosterior
to do the same job more easily.
tf=time.time()
ys=[gum.getPosterior(bn,evs={'MINVOLSET':[0,x/K,0.5]},target='VENTALV').tolist()
    for x in r]
getPlot(xs,ys,K,time.time()-tf)
Last try: optimized lazy propagation with relevance reasoning and incremental inference¶
Optimized inference in aGrUM can use the targets and the evidence to reduce the computations. This is called relevance reasoning.
Moreover, if the values of the evidence change but not the structure of the query (same target nodes, same hard-evidence nodes, same soft-evidence nodes), aGrUM may reuse some computations from one query to the next. This is called incremental inference.
tf=time.time()
ie=gum.LazyPropagation(bn)
ie.setNumberOfThreads(1) # to be fair, we avoid multithreaded inference
ie.addEvidence('MINVOLSET',[1,1,1])
ie.addTarget('VENTALV')
ys=[]
for x in r:
    ie.chgEvidence('MINVOLSET',[0,x/K,0.5])
    ie.makeInference()
    ys.append(ie.posterior('VENTALV').tolist())
delta3=time.time()-tf
getPlot(xs,ys,K,delta3)
print("Mean duration of a lazy propagation : {:5.3f}ms".format(1000*delta1/K))
print("Mean duration of a variable elimination : {:5.3f}ms".format(1000*delta2/K))
print("Mean duration of an optimized lazy propagation : {:5.3f}ms".format(1000*delta3/K))
Mean duration of a lazy propagation : 16.817ms
Mean duration of a variable elimination : 1.504ms
Mean duration of an optimized lazy propagation : 1.465ms
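To make the gains explicit, we can compute the speedups from the timings measured above (a small sketch reusing the delta1, delta2 and delta3 of the previous cells):
# speedups relative to the naive loop that rebuilds a LazyPropagation each time
print("VariableElimination speedup       : {:4.1f}x".format(delta1/delta2))
print("optimized LazyPropagation speedup : {:4.1f}x".format(delta1/delta3))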
How it works¶
bn=gum.fastBN("Y->X->T1;Z2->X;Z1->X;Z1->T1;Z1->Z3->T2")
ie=gum.LazyPropagation(bn)
gnb.flow.row(bn,bn.cpt("X"),gnb.getJunctionTree(bn),gnb.getJunctionTreeMap(bn,size="3!"),
             captions=["BN","potential","Junction Tree","The map"])
(output: the BN, the junction tree and its map, and the CPT of X)

| Z1 | Z2 | Y | X=0 | X=1 |
|---|---|---|---|---|
| 0 | 0 | 0 | 0.4201 | 0.5799 |
| 0 | 0 | 1 | 0.4808 | 0.5192 |
| 0 | 1 | 0 | 0.1593 | 0.8407 |
| 0 | 1 | 1 | 0.4072 | 0.5928 |
| 1 | 0 | 0 | 0.7828 | 0.2172 |
| 1 | 0 | 1 | 0.1490 | 0.8510 |
| 1 | 1 | 0 | 0.1453 | 0.8547 |
| 1 | 1 | 1 | 0.7630 | 0.2370 |
aGrUM/pyAgrum uses relevance reasoning techniques as much as possible to reduce the complexity of the inference.
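For instance, CPTs that are d-separated from the target by the evidence can simply be discarded. Here is a minimal check of this idea on the BN above (a sketch using only calls already seen in this notebook): given hard evidence on X and Z1, Y is d-separated from T1, so adding evidence on Y cannot change the posterior of T1 and the CPT of Y is irrelevant for this query.
ie2=gum.LazyPropagation(bn)
ie2.setEvidence({"X":0,"Z1":0})
ie2.makeInference()
p1=ie2.posterior("T1").tolist()
ie2.updateEvidence({"Y":1})   # new hard evidence on Y
ie2.makeInference()
p2=ie2.posterior("T1").tolist()
# Y is d-separated from T1 by {X,Z1} : the posterior must not change
print(max(abs(a-b) for a,b in zip(p1,p2))<1e-10)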
ie.setEvidence({"X":0})
gnb.sideBySide(ie,gnb.getDot(ie.joinTree().toDotWithNames(bn)),ie.joinTree().map(),
               captions=["","Join tree optimized for hard evidence on X","the map"])
ie.updateEvidence({"X":[0.1,0.9]})
gnb.sideBySide(ie,gnb.getDot(ie.joinTree().toDotWithNames(bn)),ie.joinTree().map(),
               captions=["","Join tree optimized for soft evidence on X","the map"])
ie.updateEvidence({"Y":0,"X":0,3:[0.1,0.9],"Z1":[0.4,0.6]})
gnb.sideBySide(ie,gnb.getDot(ie.joinTree().toDotWithNames(bn)),ie.joinTree().map(),
captions=["","Join tree optimized for hard evidence on X and Y, soft on Z2 and Z1","the map"])
ie.setEvidence({"X":0})
ie.setTargets({"T1","Z1"})
gnb.sideBySide(ie,gnb.getDot(ie.joinTree().toDotWithNames(bn)),ie.joinTree().map(),
               captions=["","Join tree optimized for hard evidence on X and targets T1,Z1","the map"])
ie.updateEvidence({"Y":0,"X":0,3:[0.1,0.9],"Z1":[0.4,0.6]})
ie.addJointTarget({"Z2","Z1","T1"})
gnb.sideBySide(ie,
gnb.getDot(ie.joinTree().toDotWithNames(bn)),ie.joinTree().map(),
captions=["","Join tree optimized for hard evidence on X and targets T1,Z1","the map"])
ie.makeInference()
ie.jointPosterior({"Z2","Z1","T1"})
(output: the joint posterior table over {Z1, Z2, T1})
ie.jointPosterior({"Z2","Z1"})
(output: the joint posterior table over {Z1, Z2})
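Since {Z1,Z2} is included in the joint target {Z1,Z2,T1}, this second posterior can be obtained by marginalizing the first one. We can check it directly (a sketch using Potential.margSumOut from pyAgrum's Potential API):
pZZT=ie.jointPosterior({"Z2","Z1","T1"})
pZZ=ie.jointPosterior({"Z2","Z1"})
# summing T1 out of the larger joint should give back the smaller one
print((pZZT.margSumOut(["T1"])-pZZ).abs().max())   # ~ 0.0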
# this will not work
try:
    ie.jointPosterior({"Z3","Z1"})
except gum.UndefinedElement:
    print("Indeed, there is no joint target which contains {4,5} !")
Indeed, there is no joint target which contains {4,5} !
ie.addJointTarget({"Z2","Z1"})
gnb.sideBySide(ie,
               gnb.getDot(ie.joinTree().toDotWithNames(bn)),
               captions=['','JoinTree'])