Post 5 - A Critique of Modern Modeling

Turkeys Should Not Go Swimming

Moving Beyond the Mean

Some of my favorite childhood memories take place at my cousins’ lake house along Lake Weir. This lake sits in a little town in southern Marion County, FL (near Ocala), about an hour north of where I grew up. In a typical Florida move, it was named after an old state land commissioner, Nathaniel Ware, but his name was misspelled when the lake was officially christened.

Ocala Star-Banner Newspaper - February 21st, 1988

My favorite memories on the lake were usually around the Fourth of July – the lake was generally fuller (you can almost always count on rain in the summer), so we were able to leap off the end of the dock at the little beach access right next to the house. One time I pushed my cousin off the end of the dock and he landed directly on his stomach, puking up everything he had just eaten upon impact. No one in my family was too excited about that one.

Other times the lake would not be so full, receding so much as to completely expose the columns supporting the very end of the dock. Sometimes, the lake was so low that it felt like you could wade clear across to the other side.

But could we?

The Genie of Lake Weir

Let’s say, during a time when the lake had really receded, the genie of Lake Weir appears and offers my cousin $10,000 if he can wade to the other side (he can swim in real life, but for this example let’s pretend that he can’t). To help him decide whether to take the bet, the genie offers him any one piece of statistical information he wants. What question should he ask?

Let’s say he asks the genie how deep the lake is, on average. A reasonable question, perhaps. The genie tells him that right now, the lake is, on average, 4 feet deep. Just 4 feet? He could easily wade through 4 feet of water (he’s 6’6”). The offer is starting to sound good. However, he is still nervous to cross. Why would this be? His instincts know better.

Pretend he turned his brain off and accepted the genie’s offer. He starts wading through the lake, only to feel the floor drop right out from under him, and he drowns. In the afterlife, he gets pissed at the genie for “lying” to him. However, the genie adamantly claims that he told him the truth. Finally, after enough complaining, the genie decides to rewind time, erase his memory of drowning, and grant him another question. What should this next question be?

Now my cousin asks how much the depth of the lake varies – specifically, its standard deviation. Had the genie come back and said that Lake Weir’s depth had a standard deviation of 8 feet (with the depth obviously floored at 0), he would have known immediately not to take the bet – that is way too much fluctuation to be confident in his survival. However, the genie actually tells him the depth of the lake has a standard deviation of half a foot. Assuming the depth is normally distributed, my cousin calculates that an average of 4 feet and a standard deviation of half a foot give him a 99.7% chance that the water level stays at or below 5.5 feet as he wades across (the three-sigma rule: 4 + 3 × 0.5 = 5.5). Even on the off chance it goes above that level, he likely will still be able to breathe. He starts wading through the water, the floor drops out from under him, and he drowns again. In the afterlife, he calls the genie a liar again (he doesn’t remember the first time) and claims it should have been nearly statistically impossible for him to drown. The genie is having fun now and decides to give him one last chance: he wipes the memory of his death and rewinds the clock one final time. My cousin has one last shot.

Finally, he arrives at his last question. He asks the genie, “What is the kurtosis of the depth of the lake?” (nerd) The genie tells him that it is 30. My cousin instantly declines the offer and moves on with his life. He knows that even with an average depth of 4 feet and a standard deviation of half a foot, a kurtosis of 30 (10x that of the normal distribution, whose kurtosis is 3) implies the distribution is definitely not normal. Under a fatter-tailed distribution with those same parameters, say a Student’s t, he knows he could potentially encounter depths of 10 feet or more.
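To make that last step concrete, here is a minimal sketch (in Python) of the comparison my cousin is implicitly running: same mean, same standard deviation, but wildly different tail risk. The 6.5-foot “drowning depth” and the Student’s t calibration are my own illustrative assumptions, not anything the genie said.

```python
# A sketch of the genie's numbers: identical mean and standard deviation,
# but a fat-tailed (Student's t) depth versus a normal one.
import numpy as np
from scipy import stats

mean_depth = 4.0   # feet (the genie's first answer)
sd_depth = 0.5     # feet (the genie's second answer)

# Back out t degrees of freedom from a kurtosis of 30: the excess kurtosis
# of a Student's t is 6 / (df - 4) for df > 4, so df = 4 + 6 / 27.
df = 4 + 6 / (30 - 3)

# Scale the t so its standard deviation is 0.5 ft
# (a t's variance is scale^2 * df / (df - 2)).
t_scale = sd_depth * np.sqrt((df - 2) / df)

drown_depth = 6.5  # feet -- hypothetical "over his head" line

p_normal = stats.norm.sf(drown_depth, loc=mean_depth, scale=sd_depth)
p_fat = stats.t.sf(drown_depth, df, loc=mean_depth, scale=t_scale)

print(f"P(depth > {drown_depth} ft), normal:     {p_normal:.1e}")
print(f"P(depth > {drown_depth} ft), fat-tailed: {p_fat:.1e}")
```

The normal assumption calls a depth over his head a multi-sigma near-impossibility; the fat-tailed version, with the exact same mean and standard deviation, makes it thousands of times more likely.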

Your Model Is Wrong

Now let’s bring my cousin’s lesson to the corporate finance and forecasting world. The underlying philosophy of corporate finance is taught very simply: if the net present value (NPV) of an action is positive, you take that action, and it adds that NPV to the value of the firm. If you are presented with multiple positive-NPV options and must choose just one, you take the action that leads to the highest NPV. Simple, but reality is never that easy.

Given the uncertainty of the future, there clearly is a range of NPV outcomes that can occur. Take a real estate acquisition or development, for example. The inputs (assumptions) that get you to an NPV, like rent growth, vacancy, and the terminal cap rate (terminal value), could each land within a range of different outcomes and, more than likely, will vary throughout the life of the project. Today’s models cannot account for that reality.

A deterministic (point-estimate) model, the “plug-and-play” kind of model most of us use, allows for no such variation. These deterministic models have you take your singular best guess (perhaps an average rate of growth, or an average terminal cap rate (value), etc.) for each assumption and plug it into the model. After some simple discounted cash flow (DCF) or other math, boom – you have your property value, budget, stock price, NAV, etc. Again, these are the types of models you likely use every day.
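For readers who prefer code, here is a minimal sketch of that plug-and-play logic. The function and every number in it are hypothetical placeholders, not anyone’s actual underwriting model.

```python
# A toy deterministic DCF: one point estimate per assumption in, one value out.
def deterministic_dcf(noi, growth, discount_rate, terminal_cap, years=10):
    """Value = PV of growing income + PV of a sale at the terminal cap rate."""
    value = 0.0
    cash_flow = noi
    for year in range(1, years + 1):
        cash_flow *= 1 + growth                      # singular best-guess growth
        value += cash_flow / (1 + discount_rate) ** year
    terminal_value = cash_flow * (1 + growth) / terminal_cap  # year-11 NOI / cap
    value += terminal_value / (1 + discount_rate) ** years
    return value

# Plug in one guess per input and, boom, a single property value comes out.
value = deterministic_dcf(noi=1_000_000, growth=0.03,
                          discount_rate=0.08, terminal_cap=0.065)
print(f"point-estimate value: ${value:,.0f}")
```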

Let’s zoom out and talk about these models in more general mathematical terms so that they apply to everybody. Let’s call the inputs, or the assumptions you make within your model, “x,” and refer to the end value/price/budget/outcome as F(x). Deterministic models focus on predicting “x” (What will growth be? What will expenses be?) without any real regard for how each assumption, and the magnitude of a potential error in it, affects F(x). I’ll explain:

The effect a change in each “x” has on F(x) is important to understand because it allows one to gauge the credibility of a point estimate. If a small change in “x” has minimal effect on F(x), then mis-forecasting that variable becomes less of a problem. If I’m valuing a commercial property, and I mis-forecast a negligible expense (even if I made a large error, say by 100%), it won’t have much of an impact on the overall outcome.

However, if a very small change in “x” has a drastic effect on F(x), then it is much more important to understand how that “x” can behave in other ways. We’ll call these “x”s “consequential.” In this scenario, relying on an average value of a consequential assumption (“x”) becomes particularly inadequate if that assumption belongs to a fatter-tailed domain. In probability speak, the higher statistical moments become much more important to understand than the first moment (the mean) when “x”s effect on F(x) is convex or concave. In normal language: my cousin’s understanding of how the depth of the lake could “spike,” analogous to the kurtosis, was much more important than the average and variance.
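A rough way to find the “consequential” x’s, reusing the hypothetical deterministic_dcf sketch above: bump each assumption by the same relative error and see which ones actually move F(x).

```python
# Assumes deterministic_dcf from the previous sketch is already defined.
base = dict(noi=1_000_000, growth=0.03, discount_rate=0.08, terminal_cap=0.065)
base_value = deterministic_dcf(**base)

for name in base:
    bumped = {**base, name: base[name] * 1.10}   # a 10% miss on this one input
    change = deterministic_dcf(**bumped) / base_value - 1
    print(f"10% error in {name:13s} -> {change:+.1%} change in F(x)")
```

Even in this toy version, the same 10% miss is nearly harmless on some inputs and punishing on others, which is exactly the asymmetry the next paragraph turns to.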

Let’s take rents and the terminal cap rate (ending value), for example, in the context of retail commercial real estate. Of all the “x”s input into a commercial real estate model, these two typically move the “value needle” the most. One could get every other estimate perfect, miss on rent growth and terminal cap rate by a slight amount, and realize a massively different NPV than originally underwritten (modeled). Because these assumptions matter more, they deserve fuller attention.

By fuller attention, I mean their complete dynamics should be understood and modeled. In commercial real estate, we typically add 0.5% to our initial cap rate and call that our terminal cap rate (for the uninitiated, a cap rate is just the income of a property divided by its value. As a cap rate goes up, ceteris paribus, market value goes down. Adding 0.5% to your initial cap rate to get your terminal cap rate is considered a “conservative” assumption).

But how do market values behave in real life? The arbitrary addition of 0.5% is completely divorced from what we know the market actually does. Sometimes values go up, sometimes they go down (genius insight, I know), and sometimes they behave wildly. As we know from the LTCM case, the values of the trades they were making were stable until they weren’t. They assumed Gaussian math applied when it didn’t. They (thought they) understood the average and variance of the depth of the lake, but not how it could spike.

The terminal cap rate, or terminal value of a real estate property, falls into this same category. In reality, the changes in value of commercial real estate are both non-Gaussian and fat-tailed. We know this for a few reasons:

1) A REIT’s relative stock price changes, which should be a gauge of its underlying value, behave much like IBM’s (as do most stocks’). Prices can be steady for a while, and then a shock like COVID or the GFC can hit and cut them in half overnight.

2) If we accept that any capital asset should be priced relative to the risk-free rate, or the yield on the US Treasury (we should), then both that Treasury yield and the spread on that yield must be “normal” for the asset’s price behavior to also be normal. A paper published in August 2020 (Kiss, 2020) looked at corporate bond spreads (and other leading indicators of economic activity in the US economy) and concluded they were all fat-tailed. The kurtosis of the yield spread was ~8.01 when measured from 1980-2020. If your asset is priced relative to something that is fat-tailed, then your asset is, in fact, fat-tailed.

3) Even if you had no clue about what happens in the public market, when shit hits the fan, as it has with COVID or inflation (more specifically, with the interest rate increases), the private markets completely lock up. The bid-ask spread becomes too wide and nothing transacts (A buyer’s cost of capital updates immediately, something sellers don’t always quite understand). These extreme periods of complete lock-up signal that the underlying behavior of commercial asset prices is fat-tailed.

Because of the fat-tailed nature of terminal cap rates, it is not enough to understand the average scenario (or the “worse-than-average” scenario of arbitrarily adding 0.5% to the initial cap rate); one must understand how that average scenario can vary, skew, and spike. In short, the argument is that the full dynamics of the system matter more and more to the value of the property as we wade into a more extreme (fatter-tailed) domain.
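As a toy preview of what “understanding the full dynamics” might look like in code: instead of one terminal cap rate, draw many from a fat-tailed distribution and look at the whole spread of values. The Student’s t parameters and the NOI here are illustrative assumptions, not calibrated to any market.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fat-tailed terminal cap rate draws centered at 6.5% (illustrative numbers).
caps = 0.065 + 0.005 * rng.standard_t(df=3, size=100_000)
caps = np.clip(caps, 0.03, 0.15)   # keep cap rates in a plausible band

values = 1_000_000 / caps          # simple terminal value = NOI / cap rate
print(f"mean terminal value: ${values.mean():,.0f}")
print(f"5th percentile:      ${np.percentile(values, 5):,.0f}")
print(f"95th percentile:     ${np.percentile(values, 95):,.0f}")
```

The mean alone hides just how wide, and how skewed, that distribution of terminal values really is.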

Thinking about data and forecasting this way makes sense. History, financial or otherwise, is largely the study of surprising events that have never happened before. Only ex post were the rare events “obviously” explainable and predictable. If one doesn’t trust this fact, look back at the forward-looking assumptions research analysts and economists made in late 2019 (pre-COVID), early 2007 (pre-GFC), early 2000 (pre-dot-com), late 1997 (pre-Russian debt crisis), etc. These events are all massive drivers of our financial history, and none of their impacts could have been predicted by assuming the “average” scenario.

The lesson, in the context of modeling and forecasting the future, is that it doesn’t matter how much money the average-assumption scenario makes if perturbations of the higher moments of those assumptions can send F(x) drastically accelerating into unacceptable negative territory, as they can in financial markets. The valuation of a property, security, and/or company should be much more holistic than modeling simple averages or singular guesses. Now let’s get more real-estate specific.

How the (Turkey) Sausage is Made

Today, the commercial real estate industry operates on a very small number of legacy software programs (ARGUS and RockportVal are the most notable, and they are interchangeable in the critiques going forward). The legacy software employs a standard DCF-based evaluation method whose output is a single-point estimate of a leveraged and unleveraged IRR and NPV. This entails, as with any model, making assumptions. Assumptions have to be made for every aspect of the future of the property, including all the inflation rates, general vacancy rates, credit loss, rollover (including the task of blending market rents based on renewal probabilities, one of the largest mistakes made by ARGUS and the real estate underwriting community writ large), how other revenues and expenses will behave, etc. All of these future assumptions, by the nature of the software, are Platonic and static. For example, one might assume market rent grows at 3% per year, expenses at 2% per year, a static average occupancy rate of 90%, and a 75% chance that a tenant renews at the end of their lease term, all held fixed throughout the life of the investment. One justifies these assumptions because they are “the historical average.” This is, of course, not how a property or a portfolio of properties behaves in reality. Rent growth and expense growth, for example, can be lumpy; they can go down some years and up significantly the next (similar to the chart of IBM’s real price changes).

Vacancy, renewal probabilities, and releasing capital expenditures (tenant improvements, landlord work, leasing commissions), as another example, are typically assumed to be static rates. However, vacancy is a function of the size of the spaces within a center, so it fluctuates as tenants move in and out of different-sized spaces within the property; a tenant moving out of a larger space means the vacancy hit is greater. The probability of a tenant renewing their lease can go up or down as the macroeconomic winds change; tenants fail at a higher rate in poor markets, as evidenced by declining occupancy rates during recessions. And, of course, the cost to release a space (releasing CapEx) is a function of construction costs and the rent a tenant will pay, both of which can vary wildly.

Because of the static, Platonic nature of the legacy software, rents and releasing CapEx are handled in another very poor way: via blending. For example, if there is a 75% chance a tenant renews by accepting their option at $20/sf, implying there is a 25% chance they move out (i.e., a rent of $0), the legacy software will calculate a blended rent of $15/sf: ($20 × 75%) + ($0 × 25%) (unless a market rent assumption is given, in which case that number is blended into the future rent).

This same blending logic also applies to the amount of releasing CapEx that will have to be spent on the space. The legacy software will carry forward that $15/sf rent assumption and a similarly blended tenant improvement, white box (AKA landlord work), and/or building improvement assumption into the model, even though there is a 0% chance that $15/sf and some blended level of releasing CapEx actually occur at the tenant’s option decision point. The future of that space is binary; it will either be $20/sf or $0/sf in rent for a certain period of time. It is either a larger amount of releasing CapEx or a smaller amount of releasing CapEx to be spent in the future, never the blended amount. Yet the property is being valued based on something that will not happen.
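In code, the blend versus the binary reality looks something like this. The rent numbers come from the example above; the releasing CapEx figures are hypothetical.

```python
# What the legacy software carries forward: a probability-weighted blend.
p_renew = 0.75
rent_renew, rent_vacate = 20.0, 0.0          # $/sf at the option decision point

blended_rent = p_renew * rent_renew + (1 - p_renew) * rent_vacate
print(f"blended rent in the model: ${blended_rent:.2f}/sf")   # $15.00/sf

# What the space actually sees: one of two branches, never the blend.
capex_renew, capex_vacate = 5.0, 60.0        # $/sf, hypothetical releasing CapEx
for label, rent, capex in [("renews", rent_renew, capex_renew),
                           ("vacates", rent_vacate, capex_vacate)]:
    print(f"if the tenant {label}: ${rent:.2f}/sf rent, ${capex:.2f}/sf CapEx")
```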

However, the worst of all Platonic assumptions might be how we handle the terminal cap rate, as this is the largest driver of NPV uncertainty. As mentioned previously, the terminal cap rate drives a significant (and, a lot of the time, the majority) portion of the value of a real estate investment. Many just assume a simple 0.5% increase to their initial cap rate. While this may be seen as a conservative move, the standard 0.5% increase is completely arbitrary and is common practice simply because that is “the way it has always been done.” The future terminal cap rate, of course, can take a range of values and should largely be a function of the risk-free rate, ceteris paribus, which can vary wildly. A small change in the risk-free rate, and thus the terminal cap rate assumption, can drastically change the market value of a property. If such a small mistake in one assumption can drastically alter the value of a property, then how could one have any confidence in a projected, point-estimate value, especially 10+ years into the future?

The Fragility of Core Assets (a quick aside): Core assets, or those that trade at lower cap rates because they are better located, better leased, more desirable, etc., are more fragile to slight changes in cap rates. In the graph below, the red curve shows how the value of a property trends if the initial cap rate is 3%. The purple curve shows the same trend if the initial cap rate is 7%. Each point on the line represents a 0.25% change in cap rate.

Core assets are thought to be “safer” than value-add projects, but this is just in reference to the robustness of the cash flow. The other largest driver of value is the terminal cap rate, which is, in part, a function of the risk-free interest rate, as with all investments. Ceteris paribus, a core asset’s value decreases faster when interest rates rise than a value-add project’s*: it takes a 0.75% increase in terminal cap rate to realize a 20% loss in value for the 3% cap rate core asset, whereas it takes a 1.75% increase in terminal cap rate to realize the same 20% loss in value for the 7% cap rate value-add project (the same is true on a portfolio NAV basis – a 100-basis-point increase in implied cap rate costs a company with a lower implied cap rate more NAV than it costs a company with a higher one). Mathematically, the derivative of the red curve is more negative than the derivative of the purple curve at the same point on the y-axis, implying heightened “value fragility” for the core asset.
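Those numbers fall straight out of the cap rate math (value is proportional to NOI divided by the cap rate, holding NOI fixed), as a quick sketch shows:

```python
# Value ~ NOI / cap rate, so a cap rate move from c to c' scales value by c / c'.
noi = 1.0  # normalized; the loss percentages don't depend on NOI

for initial_cap, bump in [(0.03, 0.0075), (0.07, 0.0175)]:
    loss = 1 - (noi / (initial_cap + bump)) / (noi / initial_cap)
    print(f"{initial_cap:.0%} cap asset, +{bump * 10_000:.0f} bps "
          f"-> {loss:.0%} loss in value")
```

The lower the starting cap rate, the fewer basis points it takes to destroy the same share of value.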

However, the opposite is also true. A reduction in terminal cap rates results in accelerating gains for both properties, but more so for the core asset. But, there is a limit to how much more the cap rate of a core asset can decrease since it is already bumping up against the risk-free rate. Therefore, it can be argued that, theoretically, a value-add asset is less sensitive to downside cap/interest rate risk and has more upside in a declining interest rate market.

*One could argue that the US 10-year treasury yield vs cap rate spread should gap out further for value-add assets because of the differences in the stability of the cash flows. It is generally true that core assets have more robust cash flows, but the value of the asset, and therefore the collateral to a secured loan, can drop much faster. From a lender’s perspective, the risk of getting paid the monthly coupon may go up for more-threatened cash flows (value-add asset), but the risk of getting paid back a balloon payment at the end of the life of the loan, via a sale or refinance, is higher for an asset whose value is more sensitive, ceteris paribus.

The Real Option to Cancel Thanksgiving

The other major flaw in DCF valuation is its inability to account for, or value, optionality. More specifically, DCF valuation cannot value what are called “real” options. Real options operate the same way financial options do: one has the right, but not the obligation, to perform a certain act at some time in the future. An example will help illustrate this.

Say one has a commercial retail real estate development opportunity in its early planning stages. It is planned to be a typical grocery-anchored neighborhood shopping center with a mix of shop space directly adjacent to the grocer. However, there is land in the front of the property that can fit up to 5 “pads” or out parcels. Pretend, for this example, that the year is 2010 and businesses still haven’t recovered from the Great Financial Crisis. Therefore, one is not confident that there is much, if any, demand for potential pads in the front (for this example, the “pads” simply represent the opportunity for extra leasable space to be added/built in the future. Pads can be more sought after than normal in-line space, but that point is irrelevant here). So, one chooses just to build the “vanilla” shopping center without any of the five pads and underwrites the project as such.

But is this the true value of the investment? On the surface, it seems to be correct since you are only building the vanilla shopping center without the pads. Why would you value something that won’t exist?

Well, it turns out 2010-2019 was a decade of declining interest rates and of portfolios leasing up from GFC lows. So, the demand for pads could very well have manifested itself as the economy recovered. Simply put, once a developer saw sufficient demand for the pads come to fruition, they would decide to build them, hopefully adding to the final NPV of the project. This optionality should add value to the original property.

Unfortunately, the deterministic DCF valuation method can’t account for the future value of the option to build the pads as demand comes to fruition. One would have to underwrite the addition of the pads as an additional phase and add it to the total for the project, something that is practiced in reality today. This seems acceptable on the surface, but there is a problem that occurs during the initial valuation process.

Think about two scenarios, the vanilla shopping center without the addition of the five pads and the full shopping center with the addition of the five pads, in the context of traditional DCF valuation and basic sensitivity tests. Modeling the vanilla shopping center without the addition of pads will only allow one to value the potential upside and downside of that one scenario. Modeling the full shopping center with the addition of all five pads will allow one to see the potential upside and downside of just this full shopping center scenario. In reality, there should be a combination of the scenarios.

Because a developer would not decide to build the pads unless there was confidence that the space would be leased, the downside of the entire project should only reflect the downside of the vanilla shopping center scenario. If there was confidence that the pads’ space would be leased, then the developer would decide to build, so the upside should reflect the full shopping center scenario. Said another way, the upside of the vanilla shopping center scenario and the downside of the full shopping center scenario should never be realized. Therefore, they shouldn’t be priced in as part of the value. Yet, when originally valuing the vanilla shopping center scenario alone, one is modeling and theoretically “pricing in” a smaller potential upside than would be realized in reality: that option to build in the future has inherent value. You could be making a mistake by rejecting the project without considering the value of the real option when, in reality, the project pencils.
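Here is a stripped-down sketch of that asymmetry, with made-up NPVs and a hypothetical 50/50 chance that pad demand materializes; the point is only that deferring the build truncates the downside while keeping the upside.

```python
p_demand = 0.50          # hypothetical chance pad demand shows up

npv = {                  # $M, all illustrative placeholder figures
    ("vanilla", "weak"): -1.0, ("vanilla", "strong"): 3.0,
    ("full", "weak"): -6.0,    ("full", "strong"): 8.0,
}

# Static DCF: commit to one plan up front and take both of its branches.
vanilla_only = (p_demand * npv[("vanilla", "strong")]
                + (1 - p_demand) * npv[("vanilla", "weak")])
full_only = (p_demand * npv[("full", "strong")]
             + (1 - p_demand) * npv[("full", "weak")])

# With the real option: build the pads only after demand shows up, so the
# vanilla upside and the full-build downside are never realized.
with_option = (p_demand * npv[("full", "strong")]
               + (1 - p_demand) * npv[("vanilla", "weak")])

print(f"vanilla only: {vanilla_only:+.2f}  full only: {full_only:+.2f}  "
      f"with option: {with_option:+.2f}")
```

In this toy case, both static plans carry the same expected NPV, while the project valued with the option is worth multiples more, value a point-estimate DCF simply never sees.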

How does this affect the way we should think about development? Of course, one should generally maximize the footprint of a development if the demand is there, but just because you can’t maximize the footprint initially doesn’t mean the project can’t be accepted. In a recessionary scenario where demand for space may not be immediately obvious, it could still behoove a developer to hold onto (or buy more) land and/or build a smaller, still-profitable structure that can absorb the initially limited demand. The worst-case scenario is the negative NPV from lack of demand for the smallest-scale, yet still feasible, project; the best-case scenario is the upside of a grander project. That inequality between the downside scenario and the upside scenario inherently adds value to any project with optionality. A more concrete example will be given in post 6.

In the next post, we stop complaining (not really) and build ourselves a tool to solve all the issues raised in this post. Thank you for reading.
