Processing Estimation Results

This section shows how to obtain and process estimation results.

[1]:
# only necessary if you run this in a jupyter notebook
%matplotlib inline

import matplotlib.pyplot as plt
# import the base class and the example files:
from pydsge import *
# import all the useful stuff from grgrlib:
from grgrlib import *

Loading and printing stats

The meta data on estimation results is stored in the numpy file format (*.npz) and is by default suffixed by the tag _meta. An example is included with the package. Let's load it…

[3]:
print(meta_data)
mod = DSGE.load(meta_data)
/home/gboehl/repos/pydsge/pydsge/examples/dfi_doc0_meta.npz

As before, the mod object collects all information and methods for the estimated model. That means you can do everything you could do before, like running IRFs or the filter. It also stores some info on the estimation:

[4]:
info = mod.info()
Title: dfi_doc0
Date: 2021-01-17 17:21:10.061171
Description: dfi, crisis sample
Parameters: 11
Chains: 40
Last 200 of 1405 samples

The mod object provides access to the estimation stats:

[5]:
summary = mod.mcmc_summary()
            distribution  pst_mean  sd/df   mean     sd   mode  hpd_5  hpd_95  mc_error
theta               beta     0.500  0.100  0.914  0.142  0.978  0.655   0.986     0.010
sigma             normal     1.500  0.375  1.936  1.007  2.302  1.591   2.933     0.070
phi_pi            normal     1.500  0.250  0.962  0.612  0.750  0.389   2.309     0.042
phi_y             normal     0.125  0.050  0.150  0.058  0.171  0.032   0.209     0.004
rho_u               beta     0.500  0.200  0.918  0.064  0.942  0.900   0.963     0.004
rho_r               beta     0.500  0.200  0.732  0.114  0.755  0.506   0.863     0.007
rho_z               beta     0.500  0.200  0.512  0.221  0.436  0.330   0.999     0.015
rho                 beta     0.750  0.100  0.735  0.049  0.715  0.657   0.807     0.002
sig_u   inv_gamma_dynare     0.100  2.000  0.233  0.080  0.209  0.121   0.318     0.005
sig_r   inv_gamma_dynare     0.100  2.000  0.090  0.023  0.098  0.062   0.106     0.002
sig_z   inv_gamma_dynare     0.100  2.000  0.148  0.090  0.103  0.083   0.266     0.006
Marginal data density:        -26.2152
Mean acceptance fraction:        0.173

The summary is a pandas.DataFrame object, so you can do fancy things with it like summary.to_latex(). Give it a try.
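As a minimal sketch, you could export a subset of the columns shown above to LaTeX (the column selection here is purely illustrative):

# pick a few of the columns printed above and export them as a LaTeX table
table = summary[['mean', 'sd', 'hpd_5', 'hpd_95']]
print(table.round(3).to_latex())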

Posterior sampling

One major interest is, of course, being able to sample from the posterior. Let's get a sample of 100 draws:

[6]:
pars = mod.get_par('posterior', nsamples=100, full=True)

Now everything is essentially the same as before, when we sampled from the prior distribution. Let us run a batch of impulse response functions with these draws:

[7]:
ir0 = mod.irfs(('e_r',1,0), pars)

# plot them:
v = ['y','Pi','r','x']
fig, ax, _ = pplot(ir0[0][...,mod.vix(v)], labels=v)
[Figure: impulse responses of y, Pi, r and x to the e_r shock]

Note that you can also alter the parameters from the posterior. Let's assume you want to see what would happen if sigma is always one. Then you could create a parameter set like:

[8]:
pars_sig1 = [mod.set_par('sigma',1,p) for p in pars]
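To see what this alteration does, here is a minimal sketch that simply reuses the IRF and plotting calls from above on the altered parameter set (the shock and variables are the same as before and purely illustrative):

# impulse responses under the counterfactual parameter set with sigma fixed at one
ir1 = mod.irfs(('e_r', 1, 0), pars_sig1)

v = ['y', 'Pi', 'r', 'x']
fig, ax, _ = pplot(ir1[0][..., mod.vix(v)], labels=v)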

This is particularly interesting if you, e.g., want to study the effects of structural monetary policy. We can also extract the smoothed shocks to do some more interesting exercises. But before that, we have to load the filter used during the estimation:

[9]:
# load filter:
mod.load_estim()
# extract shocks:
epsd = mod.extract(pars, nsamples=1, bound_sigma=4)
[estimation:]   Model operational. 12 states, 3 observables, 81 data points.
[extract:]      Extraction requires filter in non-reduced form. Recreating filter instance.
100%|██████████| 100/100 [00:26<00:00,  3.84 sample(s)/s]

epsd is a dictionary containing the smoothed means, the smoothed observables, the respective shocks, and the parameters used to obtain them, as explained in the previous section. Now that we have the shocks, we can again do a historical decomposition or run counterfactual experiments. The bound_sigma parameter adjusts the range in which the CMAES algorithm searches for the set of shocks that fits the time series (in terms of shock standard deviations). A good model combined with a data set without strong irregularities (such as financial crises) should not need a high bound_sigma. The default is to search within 4 shock standard deviations.
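If the smoother has trouble fitting an unusual episode, you can widen that search range. A minimal sketch, simply re-running the extraction from above with a larger bound (the value 6 is arbitrary and only for illustration):

# widen the search range for the shocks to 6 shock standard deviations
epsd_wide = mod.extract(pars, nsamples=1, bound_sigma=6)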