Asset Allocation example - continued (part 02)

In our last post on Explainable Models: Asset Allocation Example, we separated our asset
allocation problem into two parts:

A) Estimating future asset performance and correlation, based on model assumptions and available data

B) Choosing best portfolio allocation, based on investor preferences and the estimates from A

We treat these challenges as entirely separate, a separation made possible by our careful design of
the data flowing out of estimation process A.  This clear separation provides practical benefits
to teams and systems undertaking either challenge.

We consider A to be primarily a scientific endeavor, albeit one involving some prognosticator's
discretion and potential for folly.  Generally this estimation will rely in large part on commonly available data, and its results will be comparable with estimates provided by alternative sources.  Improvement of these estimates may be pursued in an objective manner.

Meanwhile B involves idiosyncratic decisions by investors, which are often not meaningfully comparable.  Allocations B may be derived directly from estimates A using familiar Gaussian portfolio criteria such as the Sharpe Ratio and the Efficient Frontier.  However, we note that estimates A are not necessarily Gaussian in character, and B might instead use a radically different approach to evaluating a portfolio's potential reward versus risk, e.g. "minimum expected drawdown vs. untradable benchmark asset".  We will not explore these approaches to allocation decision B further in this blog series, but we claim to have properly set the table for them through our joint PD estimation approach.
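To make the "familiar Gaussian portfolio criteria" concrete, here is a minimal sketch of a Sharpe Ratio computation in Python.  The `sharpe_ratio` helper and the toy monthly return series are hypothetical illustrations, not part of the estimation approach described in this series:

```python
import statistics

def sharpe_ratio(returns, risk_free_rate=0.0):
    """Sharpe Ratio over a series of periodic returns.

    Hypothetical helper for illustration only: in the framework of this
    post, estimates A would supply a full joint distribution; here we
    just use a plain list of observed per-period returns.
    """
    excess = [r - risk_free_rate for r in returns]
    mean_excess = statistics.fmean(excess)
    stdev_excess = statistics.stdev(excess)  # sample standard deviation
    return mean_excess / stdev_excess

# Toy monthly return series for one candidate portfolio.
monthly_returns = [0.02, -0.01, 0.03, 0.01, -0.02, 0.04]
print(sharpe_ratio(monthly_returns, risk_free_rate=0.001))
```

Criteria like this collapse the joint distribution to two summary statistics, which is exactly why a non-Gaussian decision procedure B may prefer to consume the full distribution instead.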

Our estimates describe joint probability distributions over asset outcomes defined using one K-tuple
of random variables per asset, most commonly a pair (K=2) of (returnMeasure, riskMeasure).  
Generally returnMeasures are straightforward, whereas riskMeasures must be considered more 
carefully in the context of the embedding model, which ranges over possible worlds (i.e. sample space).  

For a given list of N assets, over a time horizon from T_a to T_b of duration Dur_ab, 
in a given scenario S governed by assumptions A_s_1, A_s_2 ...
we consider a joint distribution P_ab_s over the N * K random variables,
where a sample vector with K=2 resembles:

  ((rtrnMeas_01, riskMeas_01), (rtrnMeas_02, riskMeas_02) ... (rtrnMeas_N, riskMeas_N))
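A minimal sketch of one such sample point in Python, assuming N=3 assets and K=2 measures per asset; the numeric values are illustrative placeholders, not real estimates:

```python
# One sample point from the joint distribution P_ab_s: an N-vector of
# K-tuples, here with N=3 assets and K=2 measures (returnMeasure, riskMeasure).
sample_vector = (
    (0.07, 0.15),   # asset 1: (rtrnMeas_01, riskMeas_01)
    (0.04, 0.08),   # asset 2: (rtrnMeas_02, riskMeas_02)
    (0.11, 0.22),   # asset 3: (rtrnMeas_03, riskMeas_03)
)

# Rewrite the N-vector of K-tuples as an N-row by K-column matrix
# (represented here as a list of row lists).
matrix = [list(asset_tuple) for asset_tuple in sample_vector]

N = len(matrix)        # number of assets (rows)
K = len(matrix[0])     # number of measures per asset (columns)
assert (N, K) == (3, 2)
```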

We may rewrite this random N-vector of K-tuples as a random matrix of N rows by K columns.
We further formalize using row-index n ranging from 1 to N, and col-index k ranging from 1 to K.
One full matrix value is then a point in our N x K dimensional sample space, upon which
our probability distribution may be modeled as either:

     - a pointwise probability density, which we presume to be piecewise-smooth and bounded.

     - a finite partition of the sample space, with a discrete probability assigned to each cell

Then our immediate constraint is that the continuous integral or discrete sum of probabilities over
the sample space must add up to exactly 1.  The exactness of this requirement provides
a verifiable constraint on our methods of modeling and computation.
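For the discrete case, this verifiable constraint can be checked mechanically.  A minimal sketch in Python, assuming a hypothetical partition of the sample space into four labeled cells (the cell names and masses are invented for illustration):

```python
import math

# Hypothetical discrete model: a finite partition of the N x K sample
# space into labeled cells, each assigned a probability mass.
partition_probs = {
    "low_return_low_risk":   0.25,
    "low_return_high_risk":  0.15,
    "high_return_low_risk":  0.20,
    "high_return_high_risk": 0.40,
}

total = sum(partition_probs.values())

# The verifiable constraint: the masses must sum to exactly 1
# (up to floating-point tolerance).
assert math.isclose(total, 1.0), f"probabilities sum to {total}, not 1"
```

In the continuous case the analogous check is that the density integrates to 1, which is typically verified by the numerical integration scheme used in the estimation pipeline.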

This joint distribution for an N x K matrix is the essential form of information conveyed out
from (any variety of) A, consumed by (any variety of) B.  

Explanation of these distribution estimates is entirely the responsibility of the estimating system A.

Henceforth, we focus entirely on the estimation and explanation problem A, with the understanding
that we are motivated by a need to provide usable estimates (in the form of joint distributions over
the N x K dimensional sample space, not mere summary statistics) to some unspecified set of
downstream decision procedures, B.    Our measure of success in meeting challenge A is determined
by the clarity, predictive power and biases of the estimates produced, and the computational
properties of the estimation + explanation processes.   These computational properties include
the scope and granularity of input fact data required, and the parallel implementation potential
of the estimation algorithm.

Continue on to Composed stochastic proof : joint distribution monad as dependent type.
