Using pyAgrum¶
%matplotlib inline
from pylab import *
import matplotlib.pyplot as plt
import os
Initialisation¶
- importing pyAgrum
- importing pyAgrum.lib tools
- loading a BN
import pyAgrum as gum
import pyAgrum.lib.notebook as gnb
gnb.configuration()
| Library    | Version |
|------------|---------|
| OS         | posix [darwin] |
| Python     | 3.12.2 (main, Feb 6 2024, 20:19:44) [Clang 15.0.0 (clang-1500.1.0.2.5)] |
| IPython    | 8.22.2 |
| Matplotlib | 3.8.3 |
| Numpy      | 1.26.4 |
| pyDot      | 2.0.0 |
| pyAgrum    | 1.12.1.9 |
bn=gum.loadBN("res/alarm.dsl")
gnb.showBN(bn,size='9')
Visualisation and inspection¶
print(bn['SHUNT'])
SHUNT:Labelized({NORMAL|HIGH})
print(bn.cpt(bn.idFromName('SHUNT')))
             ||  SHUNT            |
PULMEM|INTUBA||NORMAL   |HIGH     |
------|------||---------|---------|
TRUE  |NORMAL|| 0.1000  | 0.9000  |
FALSE |NORMAL|| 0.9500  | 0.0500  |
TRUE  |ESOPHA|| 0.1000  | 0.9000  |
FALSE |ESOPHA|| 0.9500  | 0.0500  |
TRUE  |ONESID|| 0.0100  | 0.9900  |
FALSE |ONESID|| 0.0500  | 0.9500  |
gnb.showPotential(bn.cpt(bn.idFromName('SHUNT')),digits=3)
| PULMEMBOLUS | INTUBATION | SHUNT=NORMAL | SHUNT=HIGH |
|---|---|---|---|
| TRUE  | NORMAL | 0.100 | 0.900 |
| FALSE | NORMAL | 0.950 | 0.050 |
| TRUE  | ESOPHA | 0.100 | 0.900 |
| FALSE | ESOPHA | 0.950 | 0.050 |
| TRUE  | ONESID | 0.010 | 0.990 |
| FALSE | ONESID | 0.050 | 0.950 |
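Each row of a CPT is a conditional distribution of the child given one configuration of its parents, so each row must sum to 1. A minimal pure-Python check on the values printed above:

```python
# Rows of the SHUNT CPT shown above: P(SHUNT | PULMEMBOLUS, INTUBATION).
# Each row is a probability distribution over SHUNT's labels, so it sums to 1.
cpt_rows = [
    [0.100, 0.900],  # PULMEMBOLUS=TRUE,  INTUBATION=NORMAL
    [0.950, 0.050],  # PULMEMBOLUS=FALSE, INTUBATION=NORMAL
    [0.100, 0.900],  # PULMEMBOLUS=TRUE,  INTUBATION=ESOPHA
    [0.950, 0.050],  # PULMEMBOLUS=FALSE, INTUBATION=ESOPHA
    [0.010, 0.990],  # PULMEMBOLUS=TRUE,  INTUBATION=ONESID
    [0.050, 0.950],  # PULMEMBOLUS=FALSE, INTUBATION=ONESID
]
for row in cpt_rows:
    assert abs(sum(row) - 1.0) < 1e-9
print("all rows sum to 1")
```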
Results of inference¶
It is easy to look at the results of inference:
gnb.showPosterior(bn,{'SHUNT':'HIGH'},'PRESS')
gnb.showPosterior(bn,{'MINVOLSET':'NORMAL'},'VENTALV')
Overall results
gnb.showInference(bn,size="10")
What is the impact of observed variables (SHUNT and VENTALV, for instance) on another one (PRESS)?
ie=gum.LazyPropagation(bn)
ie.evidenceImpact('PRESS',['SHUNT','VENTALV'])
              ||  PRESS                                |
SHUNT |VENTALV|| ZERO    |LOW      |NORMAL   |HIGH     |
------|-------||---------|---------|---------|---------|
NORMAL|ZERO   || 0.0569  | 0.2669  | 0.2005  | 0.4757  |
NORMAL|LOW    || 0.0208  | 0.2515  | 0.0553  | 0.6724  |
NORMAL|NORMAL || 0.0769  | 0.3267  | 0.1772  | 0.4192  |
NORMAL|HIGH   || 0.0501  | 0.1633  | 0.2796  | 0.5071  |
HIGH  |ZERO   || 0.0589  | 0.2726  | 0.1997  | 0.4688  |
HIGH  |LOW    || 0.0318  | 0.2237  | 0.0521  | 0.6924  |
HIGH  |NORMAL || 0.1735  | 0.5839  | 0.1402  | 0.1024  |
HIGH  |HIGH   || 0.0711  | 0.2347  | 0.2533  | 0.4410  |
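`evidenceImpact(target, evs)` computes the posterior of the target for every joint configuration of the evidence variables. The idea can be sketched by hand on a toy joint distribution (a minimal pure-Python sketch of the semantics, not pyAgrum's implementation):

```python
# Sketch of what evidenceImpact computes: from a joint distribution
# P(target, evidence), derive P(target | evidence) for every
# configuration of the evidence variable.
# Toy joint over a target t (2 states) and an evidence variable e (2 states).
joint = {  # P(t, e), sums to 1
    (0, 0): 0.10, (1, 0): 0.30,
    (0, 1): 0.15, (1, 1): 0.45,
}

def impact(joint, n_target=2, n_ev=2):
    table = {}
    for e in range(n_ev):
        norm = sum(joint[(t, e)] for t in range(n_target))  # P(e)
        table[e] = [joint[(t, e)] / norm for t in range(n_target)]
    return table

tbl = impact(joint)
print(tbl)  # each row is a distribution over the target
```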
Using inference as a function¶
It is also easy to use inference as a routine in more complex procedures.
import time
r=range(0,100)
xs=[x/100.0 for x in r]
tf=time.time()
ys=[gum.getPosterior(bn,evs={'MINVOLSET':[0,x/100.0,0.5]},target='VENTALV').tolist()
for x in r]
delta=time.time()-tf
p=plot(xs,ys)
legend(p,[bn['VENTALV'].label(i)
for i in range(bn['VENTALV'].domainSize())],loc=7);
title('VENTALV (100 inferences in %d ms)'%(1000*delta));
ylabel('posterior Probability');
xlabel('Evidence on MINVOLSET : [0,x,0.5]')
plt.show()
Another example: Python gives access to a large set of tools. Here, the evidence value at which two posterior probabilities are equal is easily computed.
x=[p/100.0 for p in range(0,100)]
tf=time.time()
y=[gum.getPosterior(bn,evs={'HRBP':[1.0-p/100.0,1.0-p/100.0,p/100.0]},target='TPR').tolist()
for p in range(0,100)]
delta=time.time()-tf
p=plot(x,y)
title('HRBP (100 inferences in %d ms)'%(1000*delta));
v=bn['TPR']
legend([v.label(i) for i in range(v.domainSize())],loc='best');
np1=(transpose(y)[0]>transpose(y)[2]).argmin()
text(x[np1]-0.05,y[np1][0]+0.005,str(x[np1]),bbox=dict(facecolor='red', alpha=0.1))
plt.show()
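The crossing point above is found with `(transpose(y)[0]>transpose(y)[2]).argmin()`: on a boolean array, `False < True`, so `argmin` returns the first index where the comparison fails, i.e. the first point where the curves cross (assuming the first curve starts above the second). A minimal pure-Python equivalent of the trick:

```python
# Pure-Python equivalent of (a > b).argmin() on boolean arrays:
# return the first index where a <= b, i.e. the crossing point of
# two curves, assuming a starts above b.
def first_crossing(a, b):
    for i, (ai, bi) in enumerate(zip(a, b)):
        if not ai > bi:
            return i
    return 0  # mimic argmin on an all-True array

a = [0.9, 0.7, 0.5, 0.3, 0.1]
b = [0.1, 0.3, 0.5, 0.7, 0.9]
print(first_crossing(a, b))  # → 2
```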
BN as a classifier¶
Generation of databases¶
Using the CSV format for the database:
print(f"The log2-likelihood of the generated base : {gum.generateSample(bn,1000,'out/test.csv',with_labels=True):.2f}")
The log2-likelihood of the generated base : -15310.52
with open("out/test.csv","r") as src:
    for _ in range(10):
        print(src.readline(),end="")
HR,CATECHOL,FIO2,HYPOVOLEMIA,VENTMACH,DISCONNECT,ARTCO2,VENTALV,PVSAT,EXPCO2,HRSAT,MINVOL,INTUBATION,STROKEVOLUME,LVFAILURE,VENTLUNG,TPR,PULMEMBOLUS,ANAPHYLAXIS,SAO2,VENTTUBE,PRESS,HISTORY,KINKEDTUBE,HRBP,MINVOLSET,PCWP,CO,PAP,LVEDVOLUME,BP,ERRLOWOUTPUT,CVP,SHUNT,INSUFFANESTH,HREKG,ERRCAUTER
HIGH,HIGH,NORMAL,FALSE,NORMAL,FALSE,HIGH,ZERO,LOW,LOW,HIGH,ZERO,NORMAL,NORMAL,FALSE,ZERO,LOW,FALSE,FALSE,LOW,LOW,HIGH,FALSE,FALSE,HIGH,NORMAL,NORMAL,HIGH,NORMAL,NORMAL,LOW,FALSE,NORMAL,NORMAL,FALSE,HIGH,FALSE
NORMAL,NORMAL,NORMAL,FALSE,NORMAL,FALSE,HIGH,LOW,NORMAL,LOW,LOW,ZERO,NORMAL,NORMAL,FALSE,ZERO,NORMAL,FALSE,FALSE,HIGH,ZERO,ZERO,FALSE,FALSE,LOW,NORMAL,NORMAL,NORMAL,NORMAL,NORMAL,NORMAL,FALSE,NORMAL,HIGH,FALSE,NORMAL,FALSE
HIGH,HIGH,NORMAL,FALSE,NORMAL,FALSE,HIGH,ZERO,LOW,LOW,HIGH,ZERO,NORMAL,NORMAL,FALSE,ZERO,NORMAL,FALSE,FALSE,LOW,LOW,HIGH,FALSE,FALSE,NORMAL,NORMAL,NORMAL,HIGH,LOW,NORMAL,HIGH,TRUE,NORMAL,NORMAL,FALSE,HIGH,FALSE
HIGH,HIGH,NORMAL,FALSE,NORMAL,FALSE,HIGH,ZERO,LOW,LOW,HIGH,NORMAL,NORMAL,LOW,FALSE,ZERO,HIGH,FALSE,FALSE,HIGH,LOW,HIGH,FALSE,FALSE,HIGH,NORMAL,NORMAL,LOW,NORMAL,NORMAL,LOW,FALSE,NORMAL,NORMAL,TRUE,HIGH,FALSE
HIGH,HIGH,NORMAL,FALSE,NORMAL,FALSE,HIGH,ZERO,LOW,LOW,HIGH,ZERO,NORMAL,NORMAL,FALSE,ZERO,HIGH,FALSE,FALSE,LOW,LOW,HIGH,FALSE,FALSE,HIGH,NORMAL,NORMAL,HIGH,NORMAL,NORMAL,HIGH,FALSE,NORMAL,NORMAL,FALSE,HIGH,FALSE
HIGH,HIGH,NORMAL,FALSE,NORMAL,FALSE,HIGH,ZERO,LOW,LOW,HIGH,ZERO,NORMAL,NORMAL,FALSE,ZERO,LOW,FALSE,FALSE,LOW,LOW,HIGH,FALSE,FALSE,HIGH,NORMAL,NORMAL,HIGH,NORMAL,NORMAL,NORMAL,FALSE,NORMAL,NORMAL,TRUE,HIGH,FALSE
HIGH,HIGH,NORMAL,FALSE,LOW,FALSE,HIGH,ZERO,LOW,NORMAL,NORMAL,ZERO,NORMAL,NORMAL,FALSE,ZERO,HIGH,FALSE,FALSE,LOW,ZERO,LOW,FALSE,FALSE,HIGH,LOW,NORMAL,HIGH,NORMAL,NORMAL,HIGH,FALSE,NORMAL,NORMAL,TRUE,LOW,TRUE
HIGH,HIGH,NORMAL,FALSE,NORMAL,TRUE,HIGH,ZERO,LOW,LOW,HIGH,ZERO,NORMAL,NORMAL,FALSE,ZERO,HIGH,FALSE,FALSE,LOW,ZERO,NORMAL,FALSE,FALSE,HIGH,NORMAL,NORMAL,HIGH,NORMAL,NORMAL,HIGH,FALSE,NORMAL,NORMAL,FALSE,HIGH,FALSE
HIGH,HIGH,NORMAL,FALSE,NORMAL,FALSE,HIGH,ZERO,LOW,LOW,HIGH,ZERO,NORMAL,NORMAL,FALSE,ZERO,LOW,FALSE,FALSE,LOW,LOW,NORMAL,FALSE,FALSE,HIGH,NORMAL,NORMAL,HIGH,NORMAL,NORMAL,NORMAL,FALSE,NORMAL,NORMAL,FALSE,HIGH,FALSE
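The generated database is a plain CSV with one column per variable, so it can be consumed with standard tools. A minimal sketch using the stdlib `csv` module on an inline excerpt standing in for `out/test.csv`:

```python
import csv, io
from collections import Counter

# Inline excerpt (hypothetical, shortened to three columns) standing in
# for the generated out/test.csv.
sample = """HR,CATECHOL,SHUNT
HIGH,HIGH,NORMAL
NORMAL,NORMAL,HIGH
HIGH,HIGH,NORMAL
"""
rows = list(csv.DictReader(io.StringIO(sample)))
counts = Counter(r["SHUNT"] for r in rows)
print(counts)  # empirical label frequencies of SHUNT in the excerpt
```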
Probabilistic classifier using BN¶
(because the CSV file was generated from the BN itself, quite good ROC curves are expected)
from pyAgrum.lib.bn2roc import showROC_PR
showROC_PR(bn,"out/test.csv",
target='CATECHOL',label='HIGH', # class and label
show_progress=True,show_fig=True,with_labels=True)
out/test.csv: 100%
(0.959215863001352, 0.9643336302500001, 0.9978234596032383, 0.11514254295)
Using another class variable
showROC_PR(bn,"out/test.csv",'SAO2','HIGH',show_progress=True)
out/test.csv: 100%
(0.9525255102040817, 0.0052263681999999995, 0.657214331912374, 0.1112440184)
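The first value returned in each tuple above is the area under the ROC curve. AUC-ROC has a simple probabilistic reading: it is the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. A minimal pure-Python sketch of that statistic (exhaustive pairing, not pyAgrum's implementation):

```python
# AUC-ROC via its Mann-Whitney interpretation: the fraction of
# (positive, negative) pairs ranked correctly, ties counting 0.5.
def auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # perfectly ranked → 1.0
```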
Fast prototyping for BNs¶
bn1=gum.fastBN("a->b;a->c;b->c;c->d",3)
gnb.sideBySide(*[gnb.getInference(bn1,evs={'c':val},targets={'a','c','d'}) for val in range(3)],
captions=[f"Inference given that $c={val}$" for val in range(3)])
print(gum.getPosterior(bn1,evs={'c':0},target='c'))
print(gum.getPosterior(bn1,evs={'c':0},target='d'))
# using pyagrum.lib.notebook's helpers
gnb.flow.row(gum.getPosterior(bn1,evs={'c':0},target='c'),gum.getPosterior(bn1,evs={'c':0},target='d'))
  c
0        |1        |2        |
---------|---------|---------|
 1.0000  | 0.0000  | 0.0000  |

  d
0        |1        |2        |
---------|---------|---------|
 0.0832  | 0.4212  | 0.4956  |
Joint posterior, impact of multiple evidence¶
bn=gum.fastBN("a->b->c->d;b->e->d->f;g->c")
gnb.sideBySide(bn,gnb.getInference(bn))
ie=gum.LazyPropagation(bn)
ie.addJointTarget({"e","f","g"})
ie.makeInference()
gnb.sideBySide(ie.jointPosterior({"e","f","g"}),ie.jointPosterior({"e","g"}),
captions=["Joint posterior $P(e,f,g)$","Joint posterior $P(e,g)$"])
gnb.sideBySide(ie.evidenceImpact("a",["e","f"]),ie.evidenceImpact("a",["d","e","f"]),
captions=["$\\forall e,f, P(a|e,f)$",
"$\\forall d,e,f, P(a|d,e,f)=P(a|d,e)$ using d-separation"]
)
gnb.sideBySide(ie.evidenceJointImpact(["a","b"],["e","f"]),ie.evidenceJointImpact(["a","b"],["d","e","f"]),
captions=["$\\forall e,f, P(a,b|e,f)$",
"$\\forall d,e,f, P(a,b|d,e,f)=P(a,b|d,e)$ using d-separation"]
)
Most Probable Explanation¶
The Most Probable Explanation (MPE) is a concept commonly used in probabilistic reasoning and Bayesian statistics. It is the assignment of values to the variables of a probabilistic model that is most consistent with (i.e. maximizes the likelihood of) the observed evidence. Essentially, it represents the most likely scenario given the available evidence and the underlying probabilistic model.
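In symbols, writing $\mathbf{e}$ for the observed evidence and $\mathbf{x}$ for a configuration of the remaining variables:

```latex
\mathrm{MPE}(\mathbf{e})
  = \arg\max_{\mathbf{x}} P(\mathbf{x} \mid \mathbf{e})
  = \arg\max_{\mathbf{x}} P(\mathbf{x}, \mathbf{e})
```

The two maximisations coincide because $P(\mathbf{x} \mid \mathbf{e}) = P(\mathbf{x}, \mathbf{e}) / P(\mathbf{e})$ and $P(\mathbf{e})$ does not depend on $\mathbf{x}$.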
ie=gum.LazyPropagation(bn)
print(ie.mpe())
<d:0|e:0|c:0|b:1|a:0|g:1|f:0>
evs={"e":0,"g":0}
ie.setEvidence(evs)
vals=ie.mpeLog2Posterior()
print(f"The most probable explanation for observation {evs} is the configuration {vals.first} for a log probability of {vals.second:.6f}")
The most probable explanation for observation {'e': 0, 'g': 0} is the configuration <g:0|e:0|d:0|f:0|c:1|b:1|a:0> for a log probability of -2.774139
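On a tiny model, the MPE can be reproduced by brute force: enumerate every configuration and keep the one with maximal joint probability. A minimal pure-Python sketch on a hypothetical two-variable network a → b (not the `bn` above, whose MPE requires pyAgrum's optimized inference):

```python
import itertools

# Toy model a -> b with made-up CPTs, for illustration only.
p_a = [0.4, 0.6]                         # P(a)
p_b_given_a = [[0.7, 0.3], [0.2, 0.8]]   # P(b | a)

def mpe():
    # Exhaustive argmax over all configurations of (a, b).
    best, best_p = None, -1.0
    for a, b in itertools.product(range(2), range(2)):
        p = p_a[a] * p_b_given_a[a][b]
        if p > best_p:
            best, best_p = {"a": a, "b": b}, p
    return best, best_p

best_conf, best_p = mpe()
print(best_conf, round(best_p, 2))  # → {'a': 1, 'b': 1} 0.48
```

This enumeration is exponential in the number of variables; `LazyPropagation.mpe()` performs the same maximisation efficiently on the junction tree.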