Module Documentation#

All functionality of the package is implemented in the pydsge.DSGE class. An instance of this class holds all information about the model and provides the respective methods.

class pydsge.DSGE(*kargs, **kwargs)#

Base class. Every model is an instance of the DSGE class and inherits its methods.

classmethod read(mfile, verbose=False)#

Read and parse a given *.yaml file.

Parameters:

mfile (str) – Path to the *.yaml file.

box_check(par=None)#

Check whether the parameter set lies outside the box constraints

Parameters:

par (array or list, optional) – The parameter set to check

clear() → None.  Remove all items from D.#
copy() → a shallow copy of D#
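Since every DSGE instance is also a dict, clear and copy behave exactly like their plain-dict counterparts; a minimal sketch on a plain dict (beta and sigma are hypothetical parameter names):

```python
# DSGE inherits dict behavior, so clear() and copy() act like the plain
# dict methods; "beta"/"sigma" are hypothetical parameter names.
d = {"beta": 0.99, "sigma": 2.0}

shallow = d.copy()  # independent top-level mapping
d.clear()           # removes all items in place

print(d)        # {}
print(shallow)  # {'beta': 0.99, 'sigma': 2.0}
```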
create_pool(ncores=None, threadpool_limit=None)#

Creates a reusable pool

Parameters:
  • ncores (int, optional) – Number of cores. Defaults to pathos’ default, which is the number of cores.

  • threadpool_limit (int, optional) – Number of threads that numpy uses independently of pathos. Only used if threadpoolctl is installed. Defaults to one.

extract(sample=None, nsamples=1, init_cov=None, precalc=True, seed=0, nattemps=4, accept_failure=False, verbose=True, debug=False, l_max=None, k_max=None, **npasargs)#

Extract the time series of (smoothed) shocks.

Parameters:
  • sample (array, optional) – One or several parameter vectors for which the smoothed shocks are calculated (default is the current self.par)

  • nsamples (int, optional) – Number of npas-draws for each element in sample. Defaults to 1

  • nattemps (int, optional) – Number of attempts per sample to crunch the sample with a different seed. Defaults to 4

Returns:

The result(s)

Return type:

tuple

fromkeys(value=None, /)#

Create a new dictionary with keys from iterable and values set to value.

get(key, default=None, /)#

Return the value for key if key is in the dictionary, else default.

get_cov(npar=None, **args)#

Get the covariance matrix

get_data(df, start=None, end=None)#

Load and prepare data … This function takes a provided pandas.DataFrame, reads out the observables as they are defined in the YAML-file, and adjusts it according to the start and end keywords. Using a pandas.DatetimeIndex as the index of the DataFrame is strongly encouraged as it can be very powerful, but it is not necessary.

Parameters:
  • df (pandas.DataFrame) –

  • start (index (optional)) –

  • end (index (optional)) –

Return type:

pandas.DataFrame

get_eps_lin(x, xp, rcond=1e-14)#

Get filter-implied (smoothed) shocks for linear model
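For a linear transition x′ = A x + B ε, filter-implied shocks can be recovered by a least-squares pseudoinverse with the rcond cutoff as tolerance. A generic numpy sketch of that idea (not pydsge's internal code; A, B and the state values are made-up assumptions):

```python
import numpy as np

def eps_lin(x, xp, A, B, rcond=1e-14):
    """Recover eps from xp = A @ x + B @ eps via pseudoinverse."""
    return np.linalg.pinv(B, rcond=rcond) @ (xp - A @ x)

# round trip: simulate one step forward, then recover the shock
A = np.array([[0.9, 0.1], [0.0, 0.5]])
B = np.array([[1.0], [0.3]])
x = np.array([0.2, -0.1])
eps = np.array([0.7])
xp = A @ x + B @ eps
print(eps_lin(x, xp, A, B))  # recovers eps up to numerical precision
```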

get_log_prob(**args)#

Get the log likelihoods in the chain

get_par(dummy=None, npar=None, asdict=False, full=True, nsamples=1, verbose=False, roundto=5, debug=False, **args)#

Get parameters. Tries to figure out what you want.

Parameters:
  • dummy (str, optional) –

    Can be None, a parameter name, a parameter set out of {‘calib’, ‘init’, ‘prior_mean’, ‘best’, ‘mode’, ‘mcmc_mode’, ‘post_mean’, ‘posterior_mean’} or one of {‘prior’, ‘post’, ‘posterior’}.

    If None, returns the current parameters (default). If there are no current parameters, this defaults to ‘best’. ‘calib’ will return the calibration in the main body of the yaml-file (parameters). ‘init’ are the initial values (first column) in the prior section of the yaml-file. ‘mode’ is the highest known mode from any sort of parameter estimation. ‘best’ will default to ‘mode’ if it exists and otherwise fall back to ‘init’. ‘posterior_mean’ and ‘post_mean’ are the same thing. ‘posterior_mode’, ‘post_mode’ and ‘mcmc_mode’ are the same thing. ‘prior’ or ‘post’/’posterior’ will draw random samples. Obviously, ‘posterior’, ‘mode’ etc are only available if a posterior/chain exists.

    NOTE: calling get_par with a set of parameters is the only way to recover the calibrated parameters that are not included in the prior (if you have changed them). All other options will work incrementally on (potential) previous edits of these parameters.

  • asdict (bool, optional) – Returns a dict of the values if True and an array otherwise (default is False).

  • full (bool, optional) – Whether to return all parameters or the estimated ones only. (default: True)

  • nsamples (int, optional) – Size of the sample. Defaults to 1

  • verbose (bool, optional) – Print additional output information (default is False)

  • roundto (int, optional) – Rounding of additional output if verbose, defaults to 5

  • args (various, optional) – Auxiliary arguments passed to gen_sys calls

Returns:

Numpy array of parameters or dict of parameters

Return type:

array or dict

get_sample(size, chain=None)#

Get a (preferably recent) sample from the chain

gfevd(eps_dict, horizon=1, nsamples=None, linear=False, seed=0, verbose=True, **args)#

Calculates the generalized forecast error variance decomposition (GFEVD, Lanne & Nyberg)

Parameters:
  • eps_dict (array or dict) –

  • nsamples (int, optional) – Sample size. Defaults to all samples provided to the function.

  • verbose (bool, optional) –

irfs(shocklist, pars=None, state=None, T=30, linear=False, set_k=False, force_init_equil=None, verbose=True, debug=False, **args)#

Simulate impulse responses

Parameters:
  • shocklist (tuple or list of tuples) – Tuple of (shockname, size, period)

  • T (int, optional) – Simulation horizon. (default: 30)

  • linear (bool, optional) – Simulate linear model (default: False)

  • set_k (int, optional) – Enforce a k (defaults to False)

  • force_init_equil (bool, optional) – If set to False, the equilibrium will be recalculated every iteration. This may be problematic if there is multiplicity because the algorithm selects the equilibrium with the lowest (l,k) (defaults to True)

  • verbose (bool or int, optional) – Level of verbosity (default: 1)

Returns:

The simulated series as a pandas.DataFrame object and the expected durations at the constraint

Return type:

DataFrame, tuple(int,int)
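The shocklist argument expects tuples of (shockname, size, period); a sketch of the format, where the shock names e_r and e_z are hypothetical and must match the names declared in your model's YAML file:

```python
# each entry: (shock name as defined in the YAML file, shock size, period)
shocklist = [("e_r", 2.0, 0)]    # hypothetical monetary policy shock on impact
shocklist += [("e_z", -1.0, 4)]  # hypothetical technology shock in period 4

# with a parsed model `mod`, this would then be passed as, e.g.:
# ir, durations = mod.irfs(shocklist, T=30)
print(shocklist)
```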

items() → a set-like object providing a view on D's items#
keys() → a set-like object providing a view on D's keys#
load_data(df, start=None, end=None)#

Load and prepare data … This function takes a provided pandas.DataFrame, reads out the observables as they are defined in the YAML-file, and adjusts it according to the start and end keywords. Using a pandas.DatetimeIndex as the index of the DataFrame is strongly encouraged as it can be very powerful, but it is not necessary.

Parameters:
  • df (pandas.DataFrame) –

  • start (index (optional)) –

  • end (index (optional)) –

Return type:

pandas.DataFrame

load_estim(N=None, linear=None, load_R=False, seed=None, dispatch=False, ncores=None, l_max=3, k_max=16, dry_run=True, use_prior_transform=None, verbose=True, debug=False, **filterargs)#

Initializes the tools necessary for estimation

Parameters:
  • N (int, optional) – Number of ensemble members for the TEnKF. Defaults to 300 if no previous information is available.

  • linear (bool, optional) – Whether a linear or nonlinear filter is used. Defaults to False if no previous information is available.

  • load_R (bool, optional) – Whether to load filter.R from previous information.

  • seed (int, optional) – Random seed. Defaults to 0

  • dispatch (bool, optional) – Whether to use a dispatcher to create jitted transition and observation functions. Defaults to False.

  • verbose (bool/int, optional) –

    Whether to display messages:

    0 – no messages
    1 – duration
    2 – duration & error messages
    3 – duration, error messages & vectors
    4 – maximum informative

load_rdict(path=None, suffix='')#

Load stored dictionary of results

The idea is to keep meta data (the model, setup, …) and the results obtained (chains, smoothed residuals, …) separate. save_rdict suggests some standard conventions.

mbcs_index(vd, verbose=True)#

This implements a main-business-cycle shock measure

Ranging between 0 and 1, the measure indicates how well a single shock explains the business cycle dynamics

mcmc(p0=None, nsteps=3000, nwalks=None, tune=None, moves=None, temp=False, seed=None, backend=True, suffix=None, resume=False, append=False, verbose=True, **kwargs)#

Run the emcee ensemble MCMC sampler.

Parameters:
  • p0 (ndarray, optional) – Array of initial states of the walkers in the parameter space

  • nsteps (int, optional) – Number of iterations. Defaults to 3000

  • nwalks (int, optional) – Number of walkers. By default, tries to infer this from p0

  • tune (int, optional) – Number of tuning periods

  • moves (emcee.moves object, optional) – Type of emcee sampler. Defaults to the DIME sampler

  • temp (float, optional) – Likelihood tempering. Defaults to no tempering (temp=1)

  • seed (int, optional) – Random seed. Defaults to 0

  • backend (various, optional) – Optional backends for storing

  • suffix (string, optional) – Preferred suffix to the backend

  • resume (bool, optional) – Whether to resume an estimation from the backend

  • append (bool, optional) – Whether to append to the backend

  • verbose (bool, optional) – Degree of verbosity

mdd(method='laplace', chain=None, lprobs=None, tune=None, verbose=False, **args)#

Approximate the marginal data density.

Parameters:

method (str) – The method used for the approximation. Can be either of ‘laplace’, ‘mhm’ (modified harmonic mean) or ‘hess’ (Laplace approximation with a numerical approximation of the Hessian; NOT FUNCTIONAL).
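The ‘laplace’ method rests on the standard Laplace approximation, log p(y) ≈ ℓ(θ*) + (d/2)·log 2π + ½·log|Σ|, where ℓ is the unnormalized log posterior at its mode θ* and Σ the inverse Hessian there. A generic numerical sketch of that formula (not pydsge's implementation); it is exact for a Gaussian posterior:

```python
import numpy as np

def laplace_mdd(logpost_mode, cov):
    """Laplace approximation of log of the integral of exp(l(theta)),
    given l at its mode and the inverse Hessian (covariance) there."""
    d = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)
    return logpost_mode + 0.5 * d * np.log(2 * np.pi) + 0.5 * logdet

# exact in the Gaussian case: l(theta) = -theta'theta/2 integrates
# to (2*pi)^(d/2), i.e. log Z = (d/2) * log(2*pi)
d = 3
print(laplace_mdd(0.0, np.eye(d)))
print(0.5 * d * np.log(2 * np.pi))  # same value
```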

nhd(eps_dict, linear=False, **args)#

Calculates the normalized historical decomposition, based on normalized counterfactuals

o_func(state, covs=None, pars=None)#

Get observables from state representation

Parameters:
  • state (array) –

  • covs (array, optional) – Series of covariance matrices. If provided, 95% intervals will be calculated, including the intervals of the states

obs(state, covs=None, pars=None)#

Get observables from state representation

Parameters:
  • state (array) –

  • covs (array, optional) – Series of covariance matrices. If provided, 95% intervals will be calculated, including the intervals of the states

oix(observables)#

Returns the indices of a list of observables

pop(k[, d]) → v, remove specified key and return the corresponding value.#

If the key is not found, return the default if given; otherwise, raise a KeyError.

popitem()#

Remove and return a (key, value) pair as a 2-tuple.

Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
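These are again inherited dict methods; a minimal sketch of pop's default argument and popitem's LIFO order on a plain dict:

```python
d = {"a": 1, "b": 2, "c": 3}

print(d.pop("b"))     # 2; "b" is removed
print(d.pop("x", 0))  # 0; the default is returned instead of a KeyError

print(d.popitem())    # ('c', 3) -- last-in, first-out
print(d)              # {'a': 1}
```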

prep_estim(N=None, linear=None, load_R=False, seed=None, dispatch=False, ncores=None, l_max=3, k_max=16, dry_run=True, use_prior_transform=None, verbose=True, debug=False, **filterargs)#

Initializes the tools necessary for estimation

Parameters:
  • N (int, optional) – Number of ensemble members for the TEnKF. Defaults to 300 if no previous information is available.

  • linear (bool, optional) – Whether a linear or nonlinear filter is used. Defaults to False if no previous information is available.

  • load_R (bool, optional) – Whether to load filter.R from previous information.

  • seed (int, optional) – Random seed. Defaults to 0

  • dispatch (bool, optional) – Whether to use a dispatcher to create jitted transition and observation functions. Defaults to False.

  • verbose (bool/int, optional) –

    Whether to display messages:

    0 – no messages
    1 – duration
    2 – duration & error messages
    3 – duration, error messages & vectors
    4 – maximum informative

prior_sampler(nsamples, try_parameter=True, check_likelihood=False, verbose=True, debug=False, **kwargs)#

Draw parameters from prior.

Parameters:
  • nsamples (int) – Size of the prior sample

  • check_likelihood (bool, optional) – Whether to ensure that drawn parameters have a finite likelihood (False by default)

  • try_parameter (bool, optional) – Whether to ensure that drawn parameters have a model solution (True by default)

  • verbose (bool, optional) –

  • debug (bool, optional) –

Returns:

Numpy array of parameters

Return type:

array

save_rdict(rdict, path=None, suffix='', verbose=True)#

Save dictionary of results

The idea is to keep meta data (the model, setup, …) and the results obtained (chains, smoothed residuals, …) separate.

set_par(dummy=None, setpar=None, npar=None, verbose=False, return_vv=False, roundto=5, **args)#

Set the current parameter values.

In essence, this is a wrapper around get_par which also compiles the transition function with the desired parameters.

Parameters:
  • dummy (str or array, optional) – If an array, sets all parameters. If a string matching a parameter name, setpar must be provided to define the value of this parameter. Otherwise, dummy is forwarded to get_par and the returned value(s) are set as parameters.

  • setpar (float, optional) – Parameter value to be set. Only used if dummy is a parameter name.

  • npar (array, optional) – Vector of parameters. If given, this vector will be altered and returned without recompiling the model. THIS WILL ALTER THE PARAMETER WITHOUT MAKING A COPY!

  • verbose (bool) – Whether to output more or less informative messages (defaults to False)

  • roundto (int) – Define output precision if output is verbose. (default: 5)

  • args (keyword args) – Keyword arguments forwarded to the gen_sys call.

setdefault(key, default=None, /)#

Insert key with a value of default if key is not in the dictionary.

Return the value for key if key is in the dictionary, else default.

shock2state(shock)#

Create a state vector given a shock and its size

simulate(source=None, mask=None, pars=None, resid=None, init=None, operation=<ufunc 'multiply'>, linear=False, debug=False, verbose=False, **args)#

Simulate time series given a series of exogenous innovations.

Parameters:
  • source (dict) – Dict of extract results

  • mask (array) – Mask for eps. Each non-None element will be replaced.

t_func(state, shocks=None, set_k=None, return_flag=None, return_k=False, get_obs=False, linear=False, verbose=False)#

Transition function

Parameters:
  • state (array) – full state in y-space

  • shocks (array, optional) – shock vector. If None, zero vector will be assumed (default)

  • set_k (tuple of int, optional) – set the expected number of periods if desired. Otherwise it will be calculated endogenously (default).

  • return_flag (bool, optional) – whether to return error flags, defaults to True

  • return_k (bool, optional) – whether to return values of (l,k), defaults to False

  • linear (bool, optional) – whether to ignore the constraint and return the linear solution, defaults to False

  • verbose (bool or int, optional) – Level of verbosity, defaults to 0

update([E, ]**F) → None.  Update D from dict/iterable E and F.#

If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
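A minimal sketch of those three update paths on a plain dict (DSGE inherits this from dict; beta, sigma and phi are hypothetical parameter names):

```python
d = {"beta": 0.99}

d.update({"sigma": 2.0})  # E with a .keys() method
d.update([("phi", 1.5)])  # E as an iterable of key/value pairs
d.update(beta=0.995)      # keyword arguments F, applied last

print(d)  # {'beta': 0.995, 'sigma': 2.0, 'phi': 1.5}
```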

values() → an object providing a view on D's values#
vix(variables, dontfail=False)#

Returns the indices of a list of variables