Background
After reading the documentation for all the different functions available, you may be feeling a little overwhelmed. That is OK. Every neural network training formula configures its settings at the beginning (so that they are in place before training commences), then specifies the inputs and desired outputs for the neural network. At the end of the formula is code that resets the WiseTrader Toolbox internal memory back to its defaults. These three sections are how most of your neural network formulas will look.
If you are using the Walk-Forward version of the toolbox, the training and indicator formulas live in one file because, as mentioned earlier, training and prediction happen in real time. If you are using the Standard Neural Networks, you will have a training formula that trains the neural network and a separate formula that runs the trained neural network. The training formula should contain the neural network settings, then the training code, and finally the clean-up code at the end, just as described above. The formula that runs the neural network does not need any of the settings; it only requires the inputs to the trained neural network and the same clean-up code. The actual training and running code is described in the tutorials themselves.
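As a rough sketch, using the multi-input functions that appear in Tutorial 1 below, a training formula therefore has the following shape:
SetBarsRequired(99999, 99999); // used at the top of every example in these tutorials
// 1. Settings (set before training commences)
SetMaximumEpochs(1000);
// 2. Inputs and the desired output, followed by the training call
AddNeuralNetworkInput(ROC(C, 1), 0);
AddNeuralNetworkOutput(ROC(Ref(C, 1), 1), 0);
TrainMultiInputNeuralNetwork(FullName());
// 3. Clean up: restore the WiseTrader Toolbox internal memory back to its defaults
EnableProgress();
RestoreDefaults();
ClearNeuralNetworkInputs();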
Tutorial 1 - Predicting
To train or run any of the examples in the tutorials, simply open the formula in the editor and go to Tools -> Code Check & Profile; AmiBroker will then execute the AFL.
Predicting using a Walk-Forward Neural Network
There are many ways to predict something like the price or an indicator. In this tutorial we will look at several approaches so that you can see the different ways it can be done. The simplest way to make a prediction is to use some past data to predict the future with NeuralNetworkIndicator9 (the Walk-Forward neural network), so that we have 8 inputs (the past data) and one output (the predicted value we want). In this example we will try to predict the next day's close, Ref(C, 1), which is the close 1 day into the future, using the current close and the past 7 closes. All the NeuralNetworkIndicator functions also require a LookBack value, a UniqueID and how far into the future the output is. The resulting AFL to train the neural network and plot the prediction looks as follows (note that if everything works you should see a progress dialog box):
SetBarsRequired(99999, 99999);
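// Inputs: the current close and the past 7 closes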
i1 = C;
i2 = Ref(C, -1);
i3 = Ref(C, -2);
i4 = Ref(C, -3);
i5 = Ref(C, -4);
i6 = Ref(C, -5);
i7 = Ref(C, -6);
i8 = Ref(C, -7);
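// Desired output: the close 1 day into the future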
O1 = Ref(C, 1);
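// 8 inputs, the desired output, the UniqueID (FullName), a LookBack of 100 and an output 1 bar into the future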
res = NeuralNetworkIndicator9(i1, i2, i3, i4, i5, i6, i7, i8, O1, FullName(), 100, 1);
Plot(res, _DEFAULT_NAME(), colorRed, styleLine);
EnableProgress();
RestoreDefaults();
ClearNeuralNetworkInputs();
Note that SetBarsRequired is needed so that NeuralNetworkIndicator9 can retrieve the cached value properly and also only compute for the latest bar. The 3 function calls at the end of the formula restore any settings that may have been changed during the execution of the formula, so that any other formula that relies on the default values is not affected. The above method of predicting will retrain and predict on each bar. If this is undesirable, one can instead train a neural network once and save it to a file using TrainNeuralNetwork9, then run it using RunNeuralNetwork9.
Training and predicting using a Standard Neural Network
The AFL below trains the neural network and saves the resulting network to a file named after the current symbol:
SetBarsRequired(99999, 99999);
i1 = C;
i2 = Ref(C, -1);
i3 = Ref(C, -2);
i4 = Ref(C, -3);
i5 = Ref(C, -4);
i6 = Ref(C, -5);
i7 = Ref(C, -6);
i8 = Ref(C, -7);
O1 = Ref(C, 1);
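// Train the network and save it to file, using FullName() as the identifier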
TrainNeuralNetwork9(i1, i2, i3, i4, i5, i6, i7, i8, O1, FullName());
EnableProgress();
RestoreDefaults();
ClearNeuralNetworkInputs();
Once the above neural network has been trained, the following AFL will load it from file, run it and plot the result:
SetBarsRequired(99999, 99999);
i1 = C;
i2 = Ref(C, -1);
i3 = Ref(C, -2);
i4 = Ref(C, -3);
i5 = Ref(C, -4);
i6 = Ref(C, -5);
i7 = Ref(C, -6);
i8 = Ref(C, -7);
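// Only the inputs are needed to run a trained network; no settings or desired output are required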
res = RunNeuralNetwork9(i1, i2, i3, i4, i5, i6, i7, i8, FullName());
Plot(res, _DEFAULT_NAME(), colorRed, styleLine);
EnableProgress();
RestoreDefaults();
ClearNeuralNetworkInputs();
You will notice that both of the above yield mediocre results. Firstly, predicting the price is extremely hard, and secondly, the input and output data is not uniform. Even though the neural networks in the toolbox scale all data fed into them, they cannot learn a good pattern when the data is not evenly distributed. The best way to solve this problem is to feed in the rate of change instead of the absolute price as the input and output, and then add the predicted rate of change back onto the previous price. In the next example we will use the multi-input neural network to show how it can also be used and how it simplifies adding many inputs that differ only in their lag value.
SetBarsRequired(99999, 99999);
SetMaximumEpochs(1000);
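// Inputs: the one-day rate of change of the close and of the previous 9 closes (added in the loop below)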
AddNeuralNetworkInput(ROC(C,1), 0);
for(i = 1; i < 10; i++)
{
AddNeuralNetworkInput(ROC(Ref(C, -1 * i), 1), 0);
}
// Desired output: the next day's rate of change
AddNeuralNetworkOutput(ROC(Ref(C, 1), 1), 0);
TrainMultiInputNeuralNetwork(FullName());
EnableProgress();
RestoreDefaults();
ClearNeuralNetworkInputs();
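Because the network was trained on rates of change, its output is a predicted one-day rate of change rather than a price. For example, if today's close is 100 and the predicted rate of change is 0.5, the predicted close is 100 * (1 + 0.5 / 100) = 100.5.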
To run the trained neural network and add the predicted rate of change back onto the price, we use the following formula:
SetBarsRequired(99999, 99999);
AddNeuralNetworkInput(ROC(C,1), 0);
for(i = 1; i < 10; i++)
{
AddNeuralNetworkInput(ROC(Ref(C, -1 * i), 1), 0);
}
res = RunMultiInputNeuralNetwork(FullName());
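// Add the predicted rate of change back onto today's close to get the predicted price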
increaseFactor = 1 + res / 100;
prediction = C * increaseFactor;
Plot(prediction, _DEFAULT_NAME(), colorBrightGreen, styleLine);
EnableProgress();
RestoreDefaults();
ClearNeuralNetworkInputs();
That's it for this tutorial on prediction. You will notice that the above examples tend to predict the next day's close very close to today's close. This is because predicting the next day's price is extremely hard, and one needs to find the right inputs for the neural network, such as indices of international markets or even economic data. One can also try increasing the number of epochs used for training, or increasing the neural network's complexity by adding more hidden layers and neurons.
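As a minimal sketch, assuming the multi-input training formula above, increasing the number of training epochs is just a change to the settings section at the top (the value 5000 is an arbitrary illustration, not a recommendation); any functions that configure hidden layers and neurons, described in the function documentation, belong in the same settings section:
SetMaximumEpochs(5000); // train for more epochs than the 1000 used above
// ...the rest of the training formula stays the same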