We’re thrilled to bring you this in-depth discussion with Dr. Vilkelis centered around his paper, The Mathematically Correct Way of Calculating Early-Stage Startup Valuations. You can download the paper below and we’ve included an additional conversation and FAQ for those interested. For more information on Dr. Vilkelis or to reach out directly, see below:

Gintas Vilkelis, PhD | +44 (0)7878 494 772 | gintas@gintasvilkelis.com

 

If you’re a startup founder, you’ve no doubt at some point in your company’s history, whether in the dreaming stage, fundraising, or exiting, asked yourself, “How much is my company worth?” Venture capitalists need to ask these questions as well when considering writing checks and managing portfolio risk.

For those who have gone through these exercises, especially with early-stage companies, setting valuations is often a combination of simple back-of-the-envelope calculations and peer comparisons. The problem with these methodologies is that they lack the rigor necessary for both parties to agree on the level of risk, and therefore the valuation, of these companies.

Is there a better way to value early-stage startups that relies on more sound fundamentals of finance? 

As a guide to this paper, we chatted with the author to better understand the key points made in his paper. If you’d like to jump right into the paper, go ahead and download it below:

Download the full paper

From the author: The paper is 30 pages in total, but significant portions of it contain (1) a convenient reference section about “how it’s being done now” (for those who are curious), and (2) the full mathematical derivation (for those who want to see it), both of which can be skipped or skimmed quickly by readers who aren’t particularly interested in those types of details.

Therefore, to optimise the reading process, it is suggested:

  • First, read the Executive Summary, which gives the most important points and a good overview of the contents of the whole document.
  • Then, read the “How to extract the most value from reading this document” section, which suggests which parts of the document (based on your “reader profile”) you should definitely read and which ones you can safely skip without missing what’s likely to be important to you.

Setting the Table

A simplified framework for Vilkelis’ paper can be broken down into the following logical exercise:

Myth: If investors diversify and set favorable terms for their investments, they are likely to succeed over the long term (i.e. generate favorable risk-adjusted returns).

Truth: Few investments generate positive returns, and most Angels fail to generate adequate risk-adjusted returns.

Why: Investors do a poor job of identifying singular winners because current valuation methods do not allow for robust diligence of companies, instead relying on simplistic ‘rule-of-thumb’ approaches that are applied too generally and broadly (i.e. they are not idiosyncratic enough to accurately reflect the true risk and return estimates for specific companies).

Solution: A discounted cash flow approach to valuation that better enables investors to identify high-potential opportunities while mitigating risk.

Ready? Let’s dive in!

 

GoingVC: So if we start with the assumption that there are some pretty significant flaws in early-stage valuation practices, what consequences does that have for mitigating risk, and therefore ultimately for improving returns for GPs and LPs?

Vilkelis: It’s always important to make sure that the chosen tools are appropriate and applicable to the situation at hand. As far as diversification is concerned, for example, it’s important to understand what it can and cannot do.

What diversification really does is reduce the magnitude of the statistical fluctuations (the standard deviation from the mean outcome), which includes reducing the probability of the portfolio’s catastrophic failure.

This means that diversification offers adequate downside protection only when the portfolio has been constructed in a statistically-sound manner:

  • If the volatility of each portfolio company is relatively low (e.g. blue-chip companies), then even a portfolio containing a relatively small number of companies will have a low probability of catastrophic loss.
  • But when the volatility of the portfolio companies is such that (1) the vast majority of them will eventually fail, and (2) a small percentage of them will end up producing disproportionately large returns (which is the case with start-ups), then it takes a much greater degree of diversification to make the probability of catastrophic loss acceptably low.

GoingVC: Right – every finance person knows that when it comes to diversification, it’s actually the correlation between the assets that matters most, though the number of names is important as well. And I’d argue that for GPs, those correlations (more simply put, the variation in outcomes) are essentially impossible to know, so increasing the number of bets is the most practical way to mitigate some of that ‘catastrophic-loss’ probability.

Vilkelis: Indeed. The two key parameters affecting the probability of a portfolio’s catastrophic loss are: (1) the probability of success of each portfolio company, and (2) the degree of diversification that matches this “probability of success” parameter: the greater the probability of success, the fewer companies a portfolio needs to contain to offer the same degree of downside protection.

This means: if you want to create a sound portfolio composed of a relatively small number of companies, you must significantly increase the success rates – which comes down to (1) utilizing investment selection criteria that are better able to differentiate between the potential winners and losers, and (2) after investing in the potential winners, focusing single-mindedly on doing whatever it takes to help your investees succeed.

Of the two key parameters mentioned above, the “probability of success” is by far the more important one, and this is where the currently-used early-stage valuation tools (Berkus, Scorecard, and to a slightly lesser degree the VC method) are woefully inadequate. They’ve been built on the explicit assumption that all companies operating within the same industry are relatively similar to each other (this is one of the key assumptions of the VC method). Berkus and Scorecard are even worse, because they assume that all early-stage companies are essentially the same, regardless of the industry in which they operate or the size of their target market (which can vary by many orders of magnitude from one company to another). In other words, all of these models explicitly disallow the possibility of big winners existing – which we all know to be objectively false, because unicorns do exist!

GoingVC: So from a statistical point of view, you’re implying that the degree of confidence cannot be anywhere near where it needs to be in order to create concentrated portfolios, which are pretty much a hallmark of VC funds – and hence the over-reliance on finding that one unicorn to generate the required 3x portfolio-level return.

Vilkelis: The typical situation these days is that the reported probability of an early-stage Angel investee eventually becoming a huge success is around 0.5% or so, and the probability of it becoming a modest success is around 3%. Given these success probability magnitudes, the probability of a catastrophic failure of a 10-company portfolio is close to 95% (i.e. 19 out of 20 portfolios will fail), and for such a portfolio to have the 80% statistical probability of success, it would have to contain 320 portfolio companies. Very few early-stage investors manage to assemble portfolios of such size.
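For readers who want to check that arithmetic, here is a minimal sketch. It assumes each investee is an independent bet, treats “portfolio success” as holding at least one big winner, and uses the ~0.5% per-company figure quoted above:

```python
import math

# Back-of-the-envelope check of the figures quoted above (an assumption-laden
# sketch, not the paper's derivation): each investee is treated as an
# independent bet with a ~0.5% chance of becoming a huge success.
p_win = 0.005            # per-company probability of a huge success
portfolio_size = 10

# Probability that none of the 10 companies turns out to be a big winner
p_catastrophic = (1 - p_win) ** portfolio_size
print(f"P(no winner in a 10-company portfolio): {p_catastrophic:.1%}")  # ~95%

# Smallest portfolio giving at least an 80% chance of holding one winner
target = 0.80
n_needed = math.log(1 - target) / math.log(1 - p_win)
print(f"Companies needed for an 80% chance of a winner: {n_needed:.0f}")  # ~321, close to the ~320 quoted
```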

GoingVC: So where do the valuations come into play in relationship to returns?

Vilkelis: As far as the investment terms are concerned, given the poor ability of the currently-used tools to differentiate between the potential winners and losers, the current prevailing attitude appears to be “let’s reduce our cost base by investing based on the valuation figures that are as low as possible, so as to increase the probability that the outsized returns from the few winners might more than compensate for the losses from the vast majority of the investees who will fail”.

The concern about not overpaying (due to the over-inflated valuations) is of course very valid and important. But there are two major hidden pitfalls hardwired into the valuation methods that are designed to produce universally low valuations, while at the same time not adequately discriminating between the future losers and the likely big winners:

First, you are already overpaying (by orders of magnitude) for the vast majority of your investments, because a future loser’s valuation at any point in time should be zero or very close to it, instead of being in the hundreds of thousands or low millions (which is the range of valuation figures that the Berkus and Scorecard methods are hardwired to produce).

Therefore, given how easy it is to invest in the wrong things, this means that by far the most important concern and focus should be on “what to invest in”, rather than on “how much to pay when you do.”

Now of course, the future is fundamentally uncertain, so predicting the future winners and losers with absolute certainty is impossible (hence loss-making investments can never be completely avoided). But reducing their number (by using better-suited tools) is certainly possible, and if the percentage of losing investments were reduced from the current 99.5% to, say, 90%, that would increase the portfolio’s performance 20-fold and would reduce the number of portfolio companies needed to achieve an 80% statistical portfolio success probability from 320 companies to just 15.
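As a rough sanity check on those figures (using the same independent-bets assumption as the sketch earlier), the minimum portfolio size for an 80% chance of holding at least one winner is:

$$
n_{\min} \ge \frac{\ln(1 - 0.80)}{\ln(1 - p)},
\qquad n_{\min}(p = 0.5\%) \approx 321,
\qquad n_{\min}(p = 10\%) \approx 15.3
$$

and raising the per-company success rate from 0.5% to 10% multiplies the expected number of winners in a fixed-size portfolio by a factor of 20.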

Second, under-investing in companies that do have a chance of becoming hugely successful can significantly reduce the probability of their eventual success: either because they might end up lacking the necessary resources to make an important strategic move that could “make or break” their future prospects, or because early rounds of funding on terms based on an artificially low valuation can damage the Cap Table to the point where, at later rounds, an otherwise-very-viable company becomes “uninvestable”, because prospective investors think the founders, owning too small an equity share of their own company, might lose their motivation. This is one of the commonly-encountered reasons why VCs decide against investing in otherwise-attractive companies, and this inability to attract later-stage investors can reduce the company’s future potential by orders of magnitude, or even doom it to a slow and languishing death.

That’s one of the reasons why being able to calculate the valuation as correctly as possible at each round is so important. But the currently-used tools for valuing early-stage (especially pre-revenue) start-ups are inadequate for this task because, as I detail in my paper, Berkus and Scorecard (as well as all of their derivatives) are basically qualitatively valid diagnostic checklists (they correctly point out which of a start-up’s attributes investors should be paying attention to), but the way their authors then went about quantifying those factors almost completely lacks numerical validity; and the VC method (most commonly used at the slightly later early stages), while considerably more numerically valid, has some serious deficiencies as well.

GoingVC: So let’s get into what exactly you are advocating for when it comes to early-stage valuations. What’s the high-level takeaway for readers?

Vilkelis: To simplify, I am advocating that practitioners move from overpaying for the majority that will be losers while underpaying and/or under-investing in the future winners (which reduces the probability of them becoming winners), to overpaying for fewer potential losers and funding the potential winners in ways designed to optimise the probability and magnitude of their future success.

GoingVC: What I hear when you say this is a somewhat non-intuitive approach for VCs: don’t simply assume that there should be extensive losses in the portfolio “because that’s the nature of the game”, but instead apply more sophisticated and rigorous valuation and risk management practices that go well beyond naïve diversification to build high-conviction (i.e. a higher degree of confidence in the selected investments), diversified portfolios that both lower the risk and increase the number of winners – a pretty compelling combination.

Vilkelis: Precisely. 

FAQ {Financial Acumen Queries}: 

For those interested in more details, we have shared below our back-and-forth with Gintas, which dives further into the details of his paper:

FAQ1

GoingVC: I’d like to start by saying I agree with your premise almost entirely, in that early-stage valuations are often lower, but not necessarily because the companies are simply early-stage. I would counter that “valuations tend to be smaller” is justified given the necessarily higher discount rate that needs to be applied to account for the exponentially higher level of risk and uncertainty. Most companies have barely functioning business models, let alone revenues or cash flows.

Vilkelis: A small but important distinction here: what I see wrong with the current situation is not the fact that “early-stage valuations are lower because the companies are simply early-stage”, but rather the fact that “poorly-suited-for-the-purpose valuation tools (Berkus, Scorecard) become ‘the automatic choice’ whenever the companies are early-stage”.

In other words, the issue is not with the valuation figures themselves, but with the choice of methodology that gets automatically applied in those situations.

It is of course perfectly normal for early-stage companies to have significantly lower valuations than they will have as mature companies – precisely for the reasons that you mentioned (predominantly the much higher level of risk and uncertainty).

But how that risk and uncertainty are quantitatively factored into the valuation calculations is where I see the problem. The correct way of doing it is via the “probability of success” parameter in my M-DDM formula. The way it is done currently is “let’s switch to qualitative-based models like Berkus or Scorecard”, which are hard-coded to produce valuation figures within an artificially-chosen, relatively narrow range of values, and which take some important characteristics of the companies into account in a manner that makes very little quantitative sense (as I’ve detailed in the ‘Summary & Critique of the 3 methods’ section of my paper).

FAQ2

GoingVC: Continuing on that, the back-of-the-envelope approaches (Berkus, VC, etc.) are, yes, simple, but they create a standard method of estimation for companies between the “idea” stage and the “functioning, pre-cash-flow-positive” stage; i.e. they’re different flavors of peer/comparables approaches. And given the highly variable nature of outcomes within early-stage investing, missing the valuation by a couple million bucks that early in the game doesn’t truly affect the outcome in what is essentially a lottery: a high chance of a 0% return, and a very small chance of a very large return. The 0% return case dominates the valuation for startups, so this again justifies lower early-stage valuations, in my opinion. Otherwise, the analysis would be subject to potential survivorship bias if multiples and valuations were drawn from the sample of companies that were successful, which are the few, not the many.

Vilkelis: Yes, here you have quite accurately characterised the problems with the current state of affairs. The way I see it, early-stage investing can be made “much less of a lottery” if the valuation calculation tools enable prospective investors to much better detect the “diamonds in the rough” within the pile of “glass beads”.

As I’ve detailed in the paper, one of the serious flaws that all three of the current valuation methods have in common is that the implicit assumption that “diamonds (or unicorns) can’t exist” has been hard-coded into them. The consequence is that those models cannot detect “diamonds”, because they operate under the assumption that “all rocks are basically similar to each other, hence diamonds can’t exist” – which is what leaves early-stage investing “essentially a lottery”.

What I discovered during the process of writing this paper is that actually a lot can be done: first, to dramatically increase the sensitivity of the pre-investment and due diligence processes at detecting the future unicorns, and second, to detect and possibly mitigate quite a few of the potentially-lethal risks to otherwise-highly-promising projects.

In other words: (1) identify the potential future big winners, and then (2) improve their chances of that big success even further.

Furthermore, I came up with interesting insights on how business plans with a high probability of creating future unicorns could be “assembled at will”, and under which conditions this is possible.

FAQ3

GoingVC: While the DDM/DCF model is a fantastic academic model for valuation, I’d be curious how you apply it to zero-cash-flow businesses. DDM is generally best applied to more stable, dividend-paying companies, and given the uncertainty and lack of dividend payments from startups, I’m curious how you approach that problem.

Vilkelis: The short answer to this: fundamentally, the difference between pre-revenue-stage companies and well-established companies is actually much smaller than most people perceive. By its very definition, valuation is ALWAYS based on the FUTURE cash flow (which is fundamentally unknowable with certainty, and can only be guessed (or educated-guessed)). The only difference between those two groups of companies is in the degree of confidence in the accuracy of that assumed future cash flow time profile.

The reason why the traditional DDM works well for the more stable companies but not for the early-stage start-ups is that the traditional DDM is based on the implicit assumption “THIS is the future cash flow. What’s the valuation based on it?” – which works well in the case of stable companies, because their future cash flow time profile can be predicted with a sufficient degree of confidence.

But in the case of the early-stage (esp. pre-revenue) companies, that’s usually not possible, because the uncertainty is just too high. And if the investors were to say “let’s calculate the valuation based on the most likely future scenario”, they would immediately run into an intractable problem: statistically, the most likely scenario is that a start-up will eventually fail, in which case its valuation is obviously zero. Start-up investing would then become: “the investor invests $0 at a $0 pre-money valuation, for which he gets $0/$0 percentage of the start-up’s equity. The company eventually fails, leaving the investor’s equity share worth $0” – which would basically mean “start-up investing makes no sense”.

And that’s why, in the absence of proper valuation tools, models like Berkus and Scorecard were created. But now M-DDM finally “puts things into their proper places”, which should have been done from the start!

One other reason why “zero cash flow” is not a major impediment to using DDM-based valuation models is that when investing in early-stage start-ups, investors are not doing it on the expectation of immediately starting to collect dividend payments from the company’s profits. Since they are investing on the expectation of profiting from capital gains at the “liquidity event” (which is still quite a few years in the future), what really matters is the cash flow at the moment when the liquidity event starts being sought, while the cash flow now is rather immaterial and will have no influence on the outcome of the liquidity event: the new buyers in 2027 are not going to care one bit about the company’s cash flow situation in 2020!

The only purpose that non-zero cash flow serves in the early stages is as a signal that certain types of potentially-lethal risks (especially around product-market fit) are being reduced (which increases the probability of the company’s eventual success). In other words, the early-stage cash flow is primarily an indicator of the “risk of failure”, rather than of “the magnitude of the future dividends at the mature stage”. So these are the reasons why zero cash flow is not a “show-stopper” to utilising DDM-based valuation models!

 

FAQ4

GoingVC: How much more accurate can a DDM model be with so much riding on single growth and discount rate estimates for companies so young? Are you implying that better approaches to due diligence could lead to lower-risk (i.e. better odds of success) investments, and that therefore your discount rate declines?

Vilkelis: Essentially, yes! But with a small (technical) clarification: it’s important to highlight here that under the M-DDM model, time discount rate and risk factors are completely decoupled:

Valuation = (Probability of Success) x (Full Valuation @ Success) x (Time Discount Factor)

where each of the 3 factors neatly segregates a group of closely-related inputs:

  • All of the future risk of failure is encapsulated entirely within the “probability of success” parameter;
  • All of the company-related monetary factors (including the target market size, profit margins, etc.) are encapsulated entirely within the “Full valuation @ success” parameter;
  • All of the time-related factors (esp. (1) the investor’s “risk-free required rate of return”, and (2) how many years it’s expected to take for the company to reach its full potential, under the assumption that it will) are encapsulated entirely within the “time discount factor” parameter.

 

So it’s a very “clean” and transparent way of understanding and calculating valuations.
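To make the arithmetic concrete, here is a minimal sketch of that three-factor calculation with purely hypothetical inputs (the numbers and the standard compound-discounting form of the time discount factor are illustrative assumptions, not figures from the paper):

```python
# Minimal illustration of the three-factor valuation described above, using
# purely hypothetical numbers and assuming a standard compound time discount
# factor of 1 / (1 + RRR)^years.
def m_ddm_valuation(p_success, full_valuation_at_success, rrr, years):
    """Valuation = P(success) x (full valuation at success) x (time discount factor)."""
    time_discount_factor = 1.0 / (1.0 + rrr) ** years
    return p_success * full_valuation_at_success * time_discount_factor

# Hypothetical example: a 10% chance of reaching a $100M valuation in 7 years,
# discounted at a 20% required rate of return.
print(f"${m_ddm_valuation(0.10, 100_000_000, 0.20, 7):,.0f}")  # ~ $2.8M
```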

Furthermore, it is this explicit decoupling of the risk from the “required rate of return” that makes M-DDM more accurate and more usable than the original DDM (where risk is implicitly made an integral part of the RRR), because nobody really knows the correct way to incorporate the risk factor into the “required rate of return” parameter.

So one of the major benefits of M-DDM is a much greater degree of transparency, especially in terms of how the required rate of return and the risks interact:

  • Under M-DDM, if you do a decent job at assessing the risks’ probabilities, then the valuation of your investment will grow at a rate close to your RRR, while
  • Under approaches where the risk factor and the time discount factor have been completely intermingled, the relationship between the risks and the RRR is so murky and opaque that the human brain just gives up, and the predominant thought in the investor’s mind becomes “I just have to make sure I negotiate the valuation low enough, so that it doesn’t come back to bite me a few years from now…”

That said, calculating the risk probabilities is still a very complicated task, where no calculations will be “exact” – we are still talking about predicting the future without a crystal ball… But M-DDM uniquely does two important things:

  1. It is the fully mathematically correct way of calculating the valuations, and it’s the most universal embodiment of the DDM principle.
  2. It simplifies as much as possible everything that can be simplified (as far as the math is concerned), so that the only “tricky” thing left to do is predicting the future – which of course is not a trivial thing. But now at least the flawed math no longer gets in the way…

To put it another way, to do the calculations correctly, one needs to plug the correct inputs into the correct formula. As far as calculating valuations is concerned, the inputs can never be 100% accurate because of the fundamental uncertainty about future events. But what M-DDM accomplishes is providing the correct formula to plug those imperfect inputs into – i.e. by fixing the formula, it eliminates the currently-dominant source of errors.

FAQ5 

GoingVC: Why doesn’t DDM get used now for calculating early-stage valuations?

Vilkelis: Because DDM doesn’t have the ability to do multi-scenario analysis. Why is this important? Because DDM basically answers the question “if the future cash flow is X, then what is the company’s valuation, based on that?”

This works well enough when the company is well-established and has a past performance record that can be extrapolated with a considerable degree of confidence.

But in the cases of early-stage pre-revenue start-ups, there is no singular future cash flow prediction that can be extrapolated with any kind of confidence, because in such cases there are at least two dramatically different future scenarios: (1) a complete failure (which is statistically the most likely outcome), and (2) a great success (and possibly a few other feasible outcomes in-between these two extremes).

Therefore, if you try to use the DDM in this kind of situation, you immediately hit a fundamental question: which future cash flow scenario to use for the purposes of the valuation calculations?

The dilemma then is this:

  • If you use the statistically most likely future scenario (which is: “the complete future failure”), then what’s the point in considering the investment, if you are assuming that you are definitely going to lose?
  • And if you try to use e.g. “the maximum success” scenario, then the investors would have a naturally-justified reaction of: “what makes you think that this is what’s actually going to happen? Statistically, this is definitely not the most likely future scenario!”

Clearly, neither of these two options is usable or practical, and the right answer is somewhere in-between these 2 extremes. But where? DDM in its current form is unable to shed much more light onto this issue. And that’s why DDM is not being used in the cases of the early-stage pre-revenue start-ups.

M-DDM, on the other hand, adds the ability to include all feasible future cash flow scenarios into the valuation calculation (and does it in a mathematically-correct way) – which makes it usable even for the pre-revenue-stage company valuation calculations. 
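One plausible way to picture such a multi-scenario calculation (our own sketch of the idea, not the paper’s exact derivation or notation) is as a probability-weighted sum over the feasible scenarios, each with its own outcome valuation and time horizon:

```python
# A sketch of a probability-weighted, multi-scenario valuation: each feasible
# outcome gets its own probability, terminal valuation, and time horizon, and
# the discounted expected values are summed. The scenarios and the 20% RRR
# below are hypothetical.
def multi_scenario_valuation(scenarios, rrr):
    """scenarios: iterable of (probability, valuation_at_outcome, years) tuples."""
    return sum(p * v / (1.0 + rrr) ** t for p, v, t in scenarios)

scenarios = [
    (0.90, 0, 7),            # complete failure: outcome worth $0
    (0.07, 20_000_000, 7),   # moderate success
    (0.03, 150_000_000, 7),  # big success
]
print(f"${multi_scenario_valuation(scenarios, 0.20):,.0f}")  # ~ $1.6M
```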

FAQ6

GoingVC: Couldn’t GPs just run multiple variants of the VC method and assign probabilities to them to create a weighted average valuation?

Vilkelis: This would be at best only a partial improvement over the current form of the VC method, because while it would improve the “risk factor” handling (compared to how the VC method does that now), it would still leave this tool poorly suited for detecting the future unicorns, because it would retain the assumption that all companies operating in the same niche have the same magnitude of potential (i.e. “unicorns can’t exist”).

 

FAQ7

GoingVC: How do you suggest modeling out the probabilities and estimated cash flows, especially for start-ups that often are pre-revenue and therefore pre-cash flow positive? Do you actually need to do that?

Vilkelis: The only cash flow projections that matter are for the future stage of the company, at the point in time when it has become sustainably profitable and has achieved (or is close to achieving) its full potential.

Here is why: early-stage investors don’t invest in pre-revenue start-ups with the intention of immediately starting to collect dividends from the company’s profits. Instead, they invest because they hope to realise large capital gains on the value of the company at least a few years in the future (or more specifically, at the time of the “liquidity event”, at which point the new buyer’s decision on what price to pay will be based entirely on the company’s future performance expectations, derived from the latest-available data).

Furthermore, in its simplest form, M-DDM states that the current valuation of an early-stage company = [the future best-case potential valuation] x [the probability of that best case becoming a reality] x [the time discount factor, based on the Required Rate of Return and on how many years it’s estimated to take to reach that “best-case outcome”].

This means that the current cash flow (and its near-term realistic projections) serves principally only one function: as a proxy providing some indications regarding the probability of “the big future success” (important to emphasize again: probability, not magnitude).

Also, the early-stage data most relevant for predicting the future success probabilities and the cash flow magnitudes tends to be of a much more granular nature than the aggregate sales figures. For example, “the price of the company’s product that the market will bear” is an important indication of future profitability, while “how much money has been generated so far by selling the product at that price” during the pre-scaling stage can be quite meaningless as far as assessing the product’s future success is concerned, because it is far from guaranteed to accurately reflect the company’s scale-up potential: e.g. (1) early-stage sales revenue can be artificially inflated by selling the product at a significant loss, or (2) the processes used for low-volume production and sales might turn out not to be sufficiently scalable. On the other hand, the future (post-scale-up) per-unit production costs usually can be estimated with a fairly high degree of accuracy.

In other words, using the early-stage revenues for calculating the early-stage valuations can be misleading or even harmful, while even very-low-volume granular tests can reveal a fairly accurate picture – if done properly.

The above basically draws an important distinction between “product-market fit” (a rather fundamental feature that can often be impossible to remedy if it’s wrong) and “production and sales scale-up” (an operational feature, because it can be increased quickly and dramatically with the help of extra money, provided that the product-market fit is fine).

And then there are other non-revenue factors that have a major impact on the future success probability: a strong brand, the robustness of the business development strategy (including its effectiveness at addressing the foreseeable future challenges), developing competitive moats, strength of the team, etc.

One last important point: the “terminal” (i.e. “mature stage”) cash flows are considerably easier to predict than the quarterly cash flows during the earlier stages, because the “mature stage” cash flows are determined predominantly by fundamental factors like the size of the Total Addressable Market, the realistically-achievable wholesale and retail price points of the offer, etc. (which are easier to calculate than many of the transient factors). The transient cash flow, by contrast, is much more fickle, and will be quite heavily affected by future events, many of which can’t be predicted beforehand with any degree of confidence or accuracy. It’s like the difference between predicting the climate 5 years from now and predicting the daily weather for each day of the next 5 years.

FAQ8

GoingVC: So is it safe to say that the actual value of the cash flows in the model is less important than the comparative outputs (i.e. it’s the degree of magnitude between the cases and the estimated discount rate that imply an appropriate level of investment) — making this more of a risk tool than a valuation tool?

Vilkelis: The Terminal Valuation is determined by the fundamentals of the company’s business model (the size of the target market, the profit margins, the moats), because most of the execution risks are now “in the past”.

The biggest sources of uncertainty in the early stages of a company’s development, on the other hand, are the still-in-the-future risks, whose sources are mostly of a non-quantitative nature, especially of the sort “what kind of things could happen or are happening?” and “why so?”

For these reasons, the aggregate transient cash flow figures during the early stages of a company’s development are considerably less predictive of the company’s future “liquidity event” valuation than they are once the company is in a much more mature and steadily-profitable stage. Therefore, relying excessively on revenue figures when calculating early-stage valuations can be misleading, or potentially even harmful.

 

FAQ9

GoingVC: Are you advocating for concentrated portfolios still? Even if you have a more appropriate discount rate, wouldn’t it still be high enough and the failure rate therefore high enough that you would need to diversify to rely less on that lottery effect (i.e. portfolios too small rely on a single winner)?

 

Vilkelis: Investors need to simultaneously increase the success probability of each portfolio company and increase the number of companies, because this is still a game of probabilities, due to the fundamental fact that the future can never be predicted with absolute certainty, especially when it comes to start-ups. Jointly increasing both factors will increase the overall probability of success. But at the same time, as I mentioned earlier, the minimum viable portfolio size depends heavily on the success probability; and under the currently-prevailing success probabilities, that minimum viable portfolio size is so big that very few (if any?) early-stage investors are able to reach it.

 

FAQ10

GoingVC: Would you recommend M-DDM as the sole valuation technique, or in combination with other traditional VC valuation methodologies?

Vilkelis: M-DDM is intended to be a replacement for the currently-used methodologies (because those have fundamental flaws, making them poorly suited for the tasks they are used for). Adding erroneous data to a good data set degrades the quality of the whole set, so combining them is not a good idea.

The reason why Berkus, Scorecard, etc. have been used up to now is that Angel investing would be impossible if a specific valuation figure couldn’t be produced by whatever means – whether those means were mathematically valid or not.

It is an integral part of human nature to always come up with “a method” to handle the problems that must be handled somehow. The initial choices of those methods are often wrong, but sooner or later the correct solution is found, which then completely replaces the previously-used methods. Not that long ago, people believed that fever was caused by “heat in the blood”, hence blood-letting was used “to drain away the excess heat”. Now we know that fever is caused by infection, that antibiotics are the right way to treat it (when the infection is bacterial), and that blood-letting is no longer considered a valid medical procedure for managing fever.

So Berkus, Scorecard and all of their derivative models were basically a convention that all players had agreed to abide by, for the lack of any better alternatives. But now the mathematically-correct alternative has finally arrived.

That said, it might be interesting to accumulate more data on how M-DDM’s valuations compare to the ones calculated using the traditional methods.

Also, some elements of the VC method (specifically, how an “average” company in a given market niche performs) could be useful for constructing the “moderate success” scenario, to be plugged into the M-DDM calculations in addition to “the mandatory two” scenarios (“the complete failure” and “the maximum-possible success”).

FAQ11

GoingVC: Is a primary issue with all of this the fact that there just isn’t good data available for private investments? If we had access to more data, couldn’t we more reliably develop forecasts and industry-specific discount rates?

Vilkelis: I’m not sure if good data could become available at all for these purposes, given that it’s often the little things that can make a huge difference. Just imagine, for example, what the history might have been like, had Gary Kildall been in the building when the IBM people showed up in 1980 to ask him to develop an operating system for the upcoming IBM PC (and if a little later Bill Gates had not slipped into the DOS licensing agreement the clause mandating that Microsoft gets paid by IBM a license fee for every IBM PC sold, regardless of whether DOS was installed on it or not)?

Also, by their very nature, unicorns are nothing like “average companies”: Unicorns are not average because they did something different from what the average companies in the same industry sector did. Therefore, development of unicorns cannot be forecasted by using the data derived from the “average” companies.

Let’s also not forget that, at least in the tech sector, the “average” outcome for a company usually is “failure” – which means that the key question is not “how to model average companies”, but “how to create exceptional companies” (i.e. the unicorns).

That said, there are indeed certain elements that, if present, can dramatically increase the probability and magnitude of a company’s future success. Even more interestingly, this set of “success factors” is changing and evolving: “the old formula” (which is centered around the concept of “a singular brilliant idea”) is getting close to exhausting its possibilities, because the number of “simple brilliant ideas” is not infinite, and as they get progressively “snapped up” it becomes increasingly harder to find the remaining available ones. The investors who become early adopters of the new model will benefit disproportionately.

The 6 “success factors” for the new unicorn generation are detailed in my article “How to Build Your Own Unicorn Stable”.

Looking for more help on the topic? Subscribe to GoingVC to get VC research, valuation models, investment thesis, and more delivered straight to your inbox.