not incorporate elements that may lose relevance can be viewed
as minimizing the amount of information that needs to be communicated to stakeholders over longer timeframes.
Relationship of Complexity With Utility Theory
As an exercise, imagine if you were to give a customer, client,
or plan participant a computer program that assists them with
building and maintaining an adequate fund to self-insure for rare
events or to prepare for difficult-to-measure long-term needs.
Imagine the inputs the program would need and the outputs the
program would generate. Some specific questions could be asked
to characterize these elements:
■ How often would they need to run the program in order to
keep their fund on track?
■ How much data would they need to input, and how frequently
should it be updated?
■ In order to make the program effective, what kinds of data queries
to public data sources, private data sources, or “crowdsourced”
information would need to be included?
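To make the exercise concrete, here is a minimal sketch of the kind of program described above. The inputs and the simple fund-sizing rule are hypothetical illustrations, not an actuarial method:

    # Hypothetical sketch of a self-insurance planning tool; the inputs
    # and the fund-sizing rule are invented for illustration only.
    def required_fund(annual_loss_prob, loss_severity, horizon_years, buffer=1.5):
        """Rough target: expected losses over the horizon times a safety buffer."""
        return annual_loss_prob * loss_severity * horizon_years * buffer

    # Inputs an individual might supply or query from external data sources:
    target = required_fund(annual_loss_prob=0.02,  # chance of the rare event per year
                           loss_severity=50_000,   # cost if the event occurs
                           horizon_years=20)
    print(f"Suggested fund target: ${target:,.0f}")
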
Essentially, this exercise is about identifying what it would
require to take all of the evaluations that an actuary performs and
transfer these tasks back to individuals. If you imagined that you
actually distributed this program to a broad population and
recorded a log of the calculations and queries performed, how
compressible would that log be? If the log is highly compressible,
we might assume that a great deal of redundant work is being done
by all of these individuals. We should note that we cannot expect to
meaningfully eliminate redundancies that relate to outcome-relevant
physical or financial events. However, all calculations and
queries for information are fair game.
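As a rough illustration of this compressibility test, the following sketch builds a simulated log of many individuals performing similar evaluations and compares its raw and losslessly compressed sizes; the log format is invented:

    # Estimate redundancy in a simulated calculation log by comparing
    # raw size with losslessly compressed size. Log contents are invented.
    import zlib

    log_lines = [f"user={i} query=rate_table calc=fund_target assume=0.04"
                 for i in range(10_000)]
    log = "\n".join(log_lines).encode()

    compressed = zlib.compress(log, 9)
    print(f"raw: {len(log):,} bytes, compressed: {len(compressed):,} bytes")
    # A very small compressed size suggests most of the work is redundant.
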
From this perspective, the economic value generated by insurance and risk pooling mechanisms does not necessarily come
from differentials in risk preferences. Instead, the economic value
from these risk pooling mechanisms can come from removing
redundancy from the collective risk evaluations required by the
group and facilitating information collection needed for all of the
individuals. Pooling similar risks is more computationally efficient
than managing them individually.
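A toy comparison along these lines, with invented log contents, makes the point: describing one pooled evaluation takes far less information than describing the same evaluation repeated by each individual.

    # Compare the compressed size of many near-identical individual
    # evaluations with a single pooled evaluation. Contents are invented.
    import zlib

    individual = "\n".join(f"person {i}: query rates; project losses; size fund"
                           for i in range(1_000)).encode()
    pooled = b"pool of 1,000 similar risks: query rates once; project losses once; size shared fund"

    print("individuals, compressed:", len(zlib.compress(individual, 9)), "bytes")
    print("pool, compressed:       ", len(zlib.compress(pooled, 9)), "bytes")
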
A key offset to this improvement is that pooling the risks
changes the format of the solution that is needed, and additional
work is needed to assure various stakeholders that risks are being
managed appropriately—and if the risk is transferred to a for-profit
venture, additional work is needed to ensure that risks are being
managed profitably. In order for risk pooling mechanisms to be
economically viable, the improvements under aggregation have to
exceed the costs that are added.
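Stated as a toy calculation (all figures invented), the viability condition is simply that the value of the redundancy removed exceeds the assurance and supervision costs added:

    # Toy viability check with invented figures.
    redundant_work_removed = 120_000  # value of duplicated evaluations avoided
    assurance_costs_added = 45_000    # reporting, supervision, profit verification
    print("pooling viable:", redundant_work_removed > assurance_costs_added)
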
Relationship of Complexity With Risk Management
Imagine that you have the task of selecting an investment policy
for a fund with relatively few constraints. You have the options of
investing primarily in Treasury securities, investing in corporate
bonds, investing in equities, or possibly even permitting a more
complex investment policy that may contemplate derivatives and
OTC contracts. It is possible to write risk controls for any of these
policies, but it is worth noting that it is less complex to verify that
risk controls are being followed for the lower-risk options.
An investment policy that consists of a set of whitelisted securities
requires evaluation of fewer metrics to confirm the policy is
being followed than does an investment policy that is permitted
to create risk exposures. As you move through the risk spectrum,
broader duties of due diligence arise. This creates work that
needs to be verified under the supervision of whatever individuals
have ultimate responsibility for the outcome. The relative ease of
supervision of lower-risk investments may form the basis of an
explanation of the equity risk premium. The size of risk premiums
may be related to the scarcity of free time among individuals in
supervisory positions in organizations with the capacity to invest
in riskier investments.
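A minimal sketch of this verification asymmetry, with hypothetical holdings, limits, and data fields, might look like the following: the whitelist policy needs one check per holding, while the complex policy needs several metrics per position.

    # Hypothetical compliance checks: a whitelist policy versus a policy
    # permitting derivatives. Holdings, limits, and fields are invented.
    WHITELIST = {"T-BILL-2026", "T-NOTE-2030"}

    def check_whitelist(holdings):
        # One question per holding: is it on the approved list?
        return all(h in WHITELIST for h in holdings)

    def check_complex_policy(positions):
        # Many metrics per position: net exposure, counterparty quality,
        # total notional, and so on.
        return (abs(sum(p["delta"] * p["notional"] for p in positions)) < 1e6
                and all(p["counterparty_rating"] >= 2 for p in positions)
                and sum(p["notional"] for p in positions) < 5e6)

    print(check_whitelist(["T-BILL-2026"]))                    # True
    print(check_complex_policy([{"delta": 0.4, "notional": 2e5,
                                 "counterparty_rating": 3}]))  # True
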
This thinking can be extended to other types of risk controls
as well. Organizations that maintain appropriate risk controls
also implicitly choose policies that have supervision requirements
that can adequately be met by the organization. The supervision
requirements for the policy need to fit appropriately with the
reporting and workflow of the organization. One form of
organizational failure occurs when management implements
policies that require more supervision than it is capable
of providing.
Thought experiments may be helpful for evaluating this kind of
issue as well. If we had a log of all of the activities of an organization,
could the events needed to implement a new policy be inserted into
existing work in a simple manner? Putting this another way, are
the requirements of the new policy correlated with requirements
of existing work? If we add the new required events to the log,
how much does the compressed size of the log increase? There is
potentially more nuance that can be added to the analysis, but these
kinds of questions may give rough estimates of how supervision
requirements will change with the new policy.
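One way to put rough numbers on these questions, with invented log contents, is to append the new policy's required events to an existing activity log and measure how much the compressed size grows:

    # Measure the incremental compressed size of a log after appending
    # a new policy's required events. All log contents are invented.
    import zlib

    def csize(text):
        return len(zlib.compress(text.encode(), 9))

    existing = "\n".join("review trade; reconcile account; sign report"
                         for _ in range(1_000))
    correlated = "\n".join("review trade; check approved list"
                           for _ in range(200))
    novel = "\n".join(f"model derivative {i}; verify collateral {i}"
                      for i in range(200))

    base = csize(existing)
    print("correlated policy adds:", csize(existing + "\n" + correlated) - base, "bytes")
    print("novel policy adds:     ", csize(existing + "\n" + novel) - base, "bytes")
    # A small increase suggests the new requirements fit existing workflow.
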
Considerations about complexity are, well, complex. But understanding what we mean when we attempt to quantify complexity—and understanding the practical limitations and costs
associated with incremental refinement to complex systems—helps
us perform our actuarial duties better.
MATTHEW POWELL, MAAA, ASA is an assistant actuary
with Segal Consulting based in Atlanta.