After **A Portfolio's Core Position**, it was time to elaborate on methods to execute its conclusions.

First, the main equations that will guide this scenario.

It was shown that the expression: $\, n \cdot \bar x \,$, the output of a portfolio, could be a replacement for the payoff matrix. It could be broken down into two parts: $ \, n \cdot \bar x \, = (n - \lambda) \cdot \bar x_{_+} + \, \lambda \cdot \bar x_{_-} \, $ with $\, \lambda\,$, the number of losing trades, somewhere between 0 and $n$: $\,\, 0 \leq \,\lambda \, \leq n$, and where $\, \bar x_{_-} \, $ and $\,\, \bar x_{_+}\, $ represent, respectively, the average loss per losing trade and the average profit per winning trade.
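The identity is easy to verify on any trade list. A minimal sketch, using made-up per-trade P&L figures (any numbers would do):

```python
# Verify the payoff decomposition: n·x̄ = (n − λ)·x̄₊ + λ·x̄₋
# The per-trade P&L figures below are hypothetical, for illustration only.
trades = [120.0, -45.0, 80.0, -30.0, 250.0, -60.0, 15.0, 95.0]

n = len(trades)                          # total number of trades
winners = [t for t in trades if t > 0]
losers = [t for t in trades if t <= 0]

lam = len(losers)                        # λ, number of losing trades
x_bar = sum(trades) / n                  # x̄, average net profit per trade
x_plus = sum(winners) / len(winners)     # x̄₊, average profit per winning trade
x_minus = sum(losers) / len(losers)      # x̄₋, average loss per losing trade (negative)

lhs = n * x_bar
rhs = (n - lam) * x_plus + lam * x_minus
print(lhs, rhs)  # both equal the total payoff: 425.0
```

Both sides reduce to the same total, which is why the pair $(n, \bar x)$ alone can stand in for the whole payoff matrix.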

Next, a trading unit was set as a dollar amount per trade: $u = p \cdot q$. It was then shown that: $\,\mathbf{E}[u \cdot PT] = u \cdot PT $, which generated a more significant relationship: $\,\mathbf{E}[\,n\cdot \bar x] \,= u \cdot PT \cdot n$. This culminated in the desired protocol to increase long-term portfolio expectations:

$$\mathbf{E}[\,n\cdot \bar x] ^\uparrow \,= \, u \cdot(1+f_t(a)) \cdot PT \cdot(1+f_t(b)) \cdot n \cdot(1+f_t(c))$$where $f_t(a) \geq 0$, $f_t(b) \geq 0$, and $f_t(c) \geq 0\,$ would control the output. This would give an enhanced version of the payoff matrix, implicitly saying you can do better than averages.
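A sketch of this protocol as a function. The trading-unit, profit-target, and trade-count values below are assumptions for illustration; the enhancers $f_t(a)$, $f_t(b)$, $f_t(c)$ are the knobs under program control:

```python
# E[n·x̄]↑ = u·(1+f(a)) · PT·(1+f(b)) · n·(1+f(c)), with all enhancers ≥ 0.
def enhanced_expectation(u, PT, n, fa=0.0, fb=0.0, fc=0.0):
    """Long-term expectation with non-negative enhancer functions applied."""
    assert fa >= 0 and fb >= 0 and fc >= 0, "enhancers must be non-negative"
    return u * (1 + fa) * PT * (1 + fb) * n * (1 + fc)

u, PT, n = 5000.0, 0.04, 1000   # assumed: $5,000 unit, 4% target, 1,000 trades

base = enhanced_expectation(u, PT, n)                      # all enhancers at 0 → u·PT·n
boosted = enhanced_expectation(u, PT, n, 0.10, 0.10, 0.10) # 10% push on each factor
print(base, boosted)  # 200000.0 266200.0
```

With every enhancer at zero, the function collapses back to $u \cdot PT \cdot n$; even a modest 10% push on each of the three factors compounds to a 33.1% improvement.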

What these equations say is: one can use them as the foundation of a trading methodology. And because the methodology would rest on an equal sign rather than on an estimated value for the payoff matrix, it might be more justifiable. The implications could be far-reaching.

Another way to look at the problem would be: the expected value of an enhanced payoff matrix could be controlled.

$$\mathbf{E}\Bigl[\,\displaystyle {\sum (H\,.^*\Delta P)}\,\Bigr] ^\uparrow \,= \mathbf{E}[\,n\cdot \bar x] ^\uparrow \,= \, u \cdot PT \cdot n \cdot (1+f_t(a)) \cdot(1+f_t(b)) \cdot(1+f_t(c))$$where the payoff matrix expectation is translated in terms of trading units, profit targets, number of trades, and time scaling functions.

Other functions could be added. For instance, leverage, which could be under program control: $(1+ f_t(Lev))$. Or a trading-unit enhancer for when one needs to increase or decrease volume rapidly: $(1+ f_t(enh))$. We would now get something like:

$$\mathbf{E}\Bigl[\,\displaystyle {\sum (H\,.^*\Delta P)}\,\Bigr] ^\uparrow \,= \, u \cdot PT \cdot n \cdot (1+f_t(a)) \cdot(1+f_t(b)) \cdot(1+f_t(c)) \cdot(1+f_t(Lev)) \cdot(1+f_t(enh))$$Sure, it is a big equation. **Nonetheless, it is all you have to work with**, and it started with only two numbers: $n \cdot \bar x\,$ which totally explained the output of the portfolio's payoff matrix.

One could set all the time-enhancer functions to zero and nothing would change; you would get: $u \cdot PT \cdot n.\,$ And, as a consequence, have the payoff matrix's expected value come down to expected market averages.

$$\mathbf{E}\Bigl[\,\displaystyle {\sum (H\,.^*\Delta P)}\,\Bigr] \,= \, u \cdot PT \cdot n \,\longrightarrow \, \displaystyle {\mathbf{E}\Bigl[\sum (H_M\,.^*\Delta P)\,\Bigr]}$$It definitely is: if you want more, you have to do more.

Usually, when you do a backtest at the portfolio level over extended periods of time, you look at the generated metrics. Stuff like the number of trades ($n$), the average profit per trade ($\bar x$), the number of losses ($\lambda$), the number of wins ($n - \lambda$), the average profit per winning trade ($\bar x_+$), and the average loss per losing trade ($\bar x_-$).

Often, consideration is placed on the maximum drawdown when it might be of secondary importance. The object of the game is not to have the least drawdown but primarily to generate the most profits, with a consideration for having the lowest drawdown possible. If you add 5% more in drawdown and gain 5 more CAGR points in profits, will you take it? I think probably yes. I know I would, any day.

If you need convincing, try it with numbers: $A_0 \cdot \Bigl[ (1+ 0.15)^{30} - (1+0.10)^{30}\Bigr]= 48.76 \cdot A_0 $. Would you sacrifice 48 times your initial portfolio because you are afraid of 5% more in drawdown? If that is not sufficient, then try this: $A_0 \cdot \Bigl[ (1+ 0.20)^{30} - (1+0.10)^{30}\Bigr]= 219.93 \cdot A_0 $. If you wanted to put in more time, the difference would be even larger.
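The arithmetic is straightforward to check:

```python
# Cost, in multiples of the initial capital A0, of giving up CAGR points
# over a 30-year horizon: (1 + r_high)^30 − (1 + r_low)^30.
d15 = (1 + 0.15) ** 30 - (1 + 0.10) ** 30   # 15% vs 10% CAGR
d20 = (1 + 0.20) ** 30 - (1 + 0.10) ** 30   # 20% vs 10% CAGR
print(round(d15, 2))  # 48.76
print(round(d20, 2))  # 219.93
```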

That is what is at stake: long-term CAGR points. We should keep an eye on what is important: not our sensitivity to drawdowns, but the output of the trading portfolio. Regardless, there are ways to reduce probable drawdowns to acceptable levels without sacrificing precious long-term CAGR points.

A problem with drawdowns is that we know what a portfolio simulation generated, but the future will be something else. All we can say is: the backtest showed a number, and we could use it as an approximation, an estimate of what might or might not happen.

I want to de-emphasize the importance put on probable drawdowns. One, you will have them no matter what.

What is left is only a question of size, and unfortunately, there are no tools, beyond very rough estimates, that can give you an approximation. That is why you do backtests: to get an idea of their size. Still, your backtest is certainly no guarantee of the size of the drawdown you will see going forward.

Even Mr. Buffett over his 50-year career has had 4 major drawdowns in excess of 50%! He is no novice to the game.

We should expect drawdowns, and code to minimize them. Say you expect your trading strategy to have 50% drawdowns. You could reduce this drawdown to 25% simply by allocating only 50% of the trading capital to the trading strategy: $A(t)= \frac {A_0}{2}+ \Bigl[\frac {A_0}{2} \cdot (1+r)^t \Bigr].\,$ Note that it will also produce half as much, but you would have limited the drawdown to half its expected value.
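A sketch of this 50/50 split, using an assumed 15% annual return and a $100k starting stake (both numbers arbitrary):

```python
# A(t) = A0/2 + (A0/2)·(1+r)^t: half the capital sits in cash, so a 50%
# strategy drawdown can only touch half the portfolio → 25% at the portfolio level.
A0 = 100_000.0   # assumed initial capital
r = 0.15         # assumed annual return of the strategy
t = 10           # years

full = A0 * (1 + r) ** t                  # 100% allocated to the strategy
half = A0 / 2 + (A0 / 2) * (1 + r) ** t   # 50% in cash, 50% in the strategy
print(full, half)
```

The gain portion of `half` is exactly half the gain of `full`: the halved drawdown is paid for with halved compounding.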

Here is another thing some people often neglect: drawdowns in dollar amounts grow over time. They might shrink or stay the same percentage-wise, but in dollar amounts, they have to grow. For instance, take a portfolio growing according to: $A_0 \cdot (1+ 0.15)^{30}$. Its drawdown is a percentage of this; take a 50% drawdown in year 20, and its value would be: $0.50 \cdot A_0 \cdot (1+ 0.15)^{20} = 8.18 \cdot A_0$. And it gets worse if it happens later.
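The same point in numbers, compounding at the same 15% a year:

```python
# Dollar size of a fixed 50% drawdown, in multiples of the initial capital A0:
# constant in percentage terms, but growing in dollars the later it occurs.
drawdown = {year: 0.50 * (1 + 0.15) ** year for year in (10, 20, 30)}
for year in sorted(drawdown):
    print(f"year {year}: {drawdown[year]:.2f} x A0")
# year 10: 2.02 x A0, year 20: 8.18 x A0, year 30: 33.11 x A0
```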

It is the main reason why one should program their trading strategy not only to minimize drawdowns, but in such a way as to limit their impact without sacrificing too much in CAGR terms.

We are looking at the trade problem from its end points: what was the outcome of the trading strategy? We have identified metrics that totally describe a strategy's output.

We can design backwards to the origin and find ways to improve on the metrics knowing what needs to be coerced or influenced in the direction we want. We are changing the perspective, the point of view.

Usually, a strategy developer thinks of a way of doing things, programs it, and then looks at the metrics to see if his trading strategy has value or not. He will observe the same metrics we do, but will simply accept them as the strategy's signature. That was the output, and those were the metrics: a by-product that gives information about what happened during the simulation.

Whereas in this methodology, we start with the metrics we want to influence and then build a trading strategy to respond to them. We set the metrics from the start that we want to see at the end of the simulation and see if we have reached them or not.

We know we have to influence: $ u \cdot PT \cdot n.\,$ They are the 3 numbers that give the total picture. I double the trading unit $u$ and I double the outcome. I double the number of trades $n$ and I double the outcome. I double the profit target $PT$ and I double the outcome.

My job becomes finding ways to make it happen.

You are supported by this equation. You know from the start what to influence to reach your goals, and it becomes the degree of trade aggressiveness that will have an impact on the final outcome.

Do you up the ante or not? That will be the question.

The series of articles **A Simple Stock Trading Strategy: Part I, Part II, and Part III** presented a moving-average crossover system based on a rotation system found on Quantopian. Some might not even believe that performance could be increased with what amounted to really minor parameter changes to the code.

Just changing a number here and there, numbers I knew would have an impact. Evidently, I was not surprised to see the performance level increase as I changed the metrics of the game. I did not change any of the code's logic, even though it had some deficiencies; only its pivotal decision parameters.

One, I upped the trade unit by 66%. Impact: $ 1.66 \cdot u \cdot PT \cdot n.\,$ Next, I increased the number of trades. This was done in two ways: I allowed more stocks to be traded at a time, and I increased the number of potential candidates, which resulted in a significant increase in trades. Then, I allowed the profit target to execute more often, which fed the strategy, allowing it to buy more shares and make more trades. This created a positive reinforcing feedback loop: you made more money because you traded more, and you traded more because you made more money, simply by recycling the cash reserves.

I also added leverage, which I set at 1.85. The reason for the leverage was simple: there was an automatic stop-loss built in due to the total exit following the index's moving-average cross-under; therefore, no coded one was required. This index switcher would bypass any major downswing by going entirely to cash or cash equivalents. No fear of major drawdowns.

Overall impact: $ 1.66 \cdot u \cdot 1.40 \cdot PT \cdot 4 \cdot n \cdot 1.85\, \approx 17 \cdot u \cdot PT \cdot n.\,$ There was no surprise there. Only the application of simple directives. It improved the trading strategy over 17-fold. Could anybody do that? Yes, absolutely, anyone could.
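The multiplier itself is just the product of the individual pushes:

```python
# Combined impact of the parameter changes described above:
# 66% larger trade unit, 40% more from the profit-target side,
# 4x the number of trades, and 1.85x leverage.
multiplier = 1.66 * 1.40 * 4 * 1.85
print(round(multiplier, 1))  # 17.2

no_leverage = 1.66 * 1.40 * 4   # same changes without the 1.85x leverage
print(round(no_leverage, 1))  # 9.3
```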

We are talking about the same trading strategy, the same code, the same trading logic, except that in my case the payoff matrix had for output $17 \cdot u \cdot PT \cdot n$ instead of just $\, u \cdot PT \cdot n$. The author of the original program did not like to use leverage; it is a choice. Without it, the outcome would be something less: $ < 9.3 \cdot u \cdot PT \cdot n.\,$

Note, one could obtain the same ending value simply by adding a capital equivalent instead of using leverage. This would reduce the CAGR level by a little less than 20%. There is always a price to pay to achieve higher results.

Lastly, I added the $f_t(enh)$ enhancer to the mix (also just one number). It was a simulation I interrupted before it finished; I did not show any output, deleted the trading strategy on Quantopian, and have not touched it since. Don't worry, I kept a copy.

These program changes were chronicled live on Quantopian, see: https://www.quantopian.com/posts/a-weekly-view-of-a-simple-momentum-rotation-system-for-stocks

Whatever trading strategy you have, you have three numbers from your metrics that will not only describe your trading strategy but also quantify it: 1) the number of trades it can make, 2) your bet size (trade unit), and 3) your acceptable or realizable profit margin. That is literally all that counts. Everything else is of no consequence: part of the features, preferences, or descriptive properties of your trading strategy. Evidently, if you cannot get $\bar x > 0$, I would suggest forgetting trading altogether, or changing strategy.

You need to concentrate on how to make more trades, how to increase your trading unit, and how to increase your profit margin. Not on just one trade, but on the thousands and thousands of trades your program can make. This can be done whatever your original program was, thereby increasing its performance level and raising the overall return much higher than what you thought possible or were accustomed to. So, why don't you do it? **Three numbers**, two of which you set yourself.

© 2016, November 1st. Guy R. Fleury