Standard deviation of returns is a way of using statistical principles to estimate the volatility level of stocks and other investments, and, therefore, the risk involved in buying into them. The principle is based on the idea of a bell curve, where the central high point of the curve is the mean or expected average percentage return that the stock is most likely to deliver to the investor in any given time period. Following a normal distribution curve, returns that fall farther and farther from this average represent larger gains or losses on the investment, but they also become less and less likely to occur.

In most man-made and natural systems, bell curves represent the probability distribution of actual outcomes in situations that involve risk. The band within one standard deviation of the average contains 34.1% of actual results on each side of it, the band between one and two standard deviations contains an additional 13.6% on each side, and the band between two and three standard deviations contains another 2.1% on each side. What this means in reality is that an investment's actual return will land within one standard deviation of the expected average about 68% of the time, and within two standard deviations about 95% of the time. Within three standard deviations, about 99.7% of all results are accounted for, and losses or gains beyond this level are exceedingly rare.
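These coverage figures follow directly from the normal distribution and can be reproduced with nothing more than the error function in Python's standard library. The sketch below, using a hypothetical helper named `within_k_sd`, prints the share of outcomes expected to land within one, two, and three standard deviations of the average:

```python
from math import erf, sqrt

def within_k_sd(k):
    """Probability that a normally distributed return lands
    within k standard deviations of its mean."""
    return erf(k / sqrt(2))

for k in (1, 2, 3):
    print(f"within {k} standard deviation(s): {within_k_sd(k):.1%}")
# within 1 standard deviation(s): 68.3%
# within 2 standard deviation(s): 95.4%
# within 3 standard deviation(s): 99.7%
```

These are the exact normal-distribution values behind the rounded 68%, 95%, and 99.7% figures quoted above.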

Probability predicts, therefore, that an investment return is much more likely to be close to the average expected return than far away from it. Despite the volatility of any investment, if its returns follow a normal distribution, half of them will fall below the expected value and half above it, with the expected value itself being the single most likely outcome. More usefully, 68% of the time the return will be within one standard deviation of the expected value, and 95% of the time it will be within two. Calculating returns is a process of plotting all of these variations on a bell curve, and the more often they fall far from the average, the higher the variance or volatility of the investment is.
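As a brief illustration of how the expected return and the standard deviation are estimated from observed returns in the first place, the sketch below applies Python's standard `statistics` module to a made-up series of yearly percentage returns; the figures are purely illustrative, not real market data.

```python
from statistics import mean, stdev

# Hypothetical yearly returns, in percent (illustrative numbers only)
returns = [12.0, 8.5, 10.2, 6.8, 13.1, 9.4, 11.0, 7.9]

avg = mean(returns)   # the center of the bell curve
vol = stdev(returns)  # sample standard deviation: the width of the curve
print(f"expected return ≈ {avg:.1f}%, volatility ≈ {vol:.1f}%")
```

The more widely the individual returns scatter around the mean, the larger `vol` becomes, which is exactly the "higher variance" described above.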

An attempt to visualize this process with actual numbers for the standard deviation of returns can be done using an arbitrary return percentage. An example would be a stock investment with an expected average return rate of 10% and a standard deviation of returns of 2%. If the stock follows a normal probability distribution curve, half of its actual returns will fall below 10% and half above it, with 10% itself being the single most likely outcome. More informative, however, is the one-standard-deviation band: 68% of the time, the stock can be expected to return somewhere between 8% and 12%. Widening the band to two standard deviations, 95% of the time the stock will return somewhere between 6% and 14%.
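The bands in this example can be computed mechanically, since each one simply stretches a whole number of standard deviations on either side of the average. A minimal Python sketch, assuming the 10% mean and 2% standard deviation used above:

```python
mu, sigma = 10.0, 2.0  # expected return and standard deviation, in percent

for k, prob in ((1, "68%"), (2, "95%"), (3, "99.7%")):
    lo, hi = mu - k * sigma, mu + k * sigma
    print(f"about {prob} of returns fall between {lo:.0f}% and {hi:.0f}%")
# about 68% of returns fall between 8% and 12%
# about 95% of returns fall between 6% and 14%
# about 99.7% of returns fall between 4% and 16%
```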

The higher the standard deviation of returns is, the more volatile the stock is both for increasing positive gains and increasing losses, so a standard deviation of returns of 20% would represent much more variance than one of 5%. As a return gets farther away from the center of the bell curve, it is less and less likely to occur; however, at the same time, all possible outcomes are accounted for. This means that, at three standard deviations, almost every possible real-world situation is plotted at 99.7%, but only about 4% of the time, roughly 2.1% on each side, does an actual return fall between two and three deviations from the average, which, in the case of the example, would be a return somewhere around 4% or 16%.
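The roughly 2.1%-per-side figure can be checked by subtracting the two-standard-deviation coverage from the three-standard-deviation coverage, again using only the standard-library error function; `within` below is a hypothetical helper name for that coverage.

```python
from math import erf, sqrt

def within(k):
    # P(|return - mean| <= k standard deviations) for a normal distribution
    return erf(k / sqrt(2))

tail_band = within(3) - within(2)  # mass between 2 and 3 SDs, both sides
print(f"between 2 and 3 SDs: {tail_band:.1%}")  # ≈ 4.3% total
print(f"per side: {tail_band / 2:.1%}")         # ≈ 2.1%
```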