Playing with the Iris01 Haskell Code
The general concept is that you open the Train.hs file in an editor, change something, save the file, and rerun (i.e. ‘./gotrain’, ‘./gouse’). This was all somewhat magic with DrRacket, but here it’s a little more basic. There is a script called ./goscale which scales the input data to 0–1 values. It has already been run, so you don’t need to run it again unless you decide to pathologically mess with the values.
I forgot to put in a fancy GUI editor, but you can use ‘nano’.
So if you’re in a terminal inside the ‘iris01’ directory, simply type,
nano -w Train.hs
Near the top you will see, fannDef = [4, 4, 4, 1]
Change that to, fannDef = [4, 8, 1]
(use the arrow keys to move around, and backspace to delete).
This removes one hidden layer and puts 8 neurons in the only remaining hidden layer.
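To see concretely what the list means, here is a small standalone sketch (not from Train.hs — the `weightCount` helper is mine) that counts how many synaptic weights a given topology creates, assuming FANN’s usual fully connected layers with one bias neuron feeding each layer:

```haskell
-- Standalone sketch (not part of Train.hs): each entry in a
-- fannDef-style list is a layer size. Assuming fully connected
-- layers with a bias neuron per feeding layer, the weight count
-- between consecutive layers is (from + 1) * to.
fannDef :: [Int]
fannDef = [4, 8, 1]

weightCount :: [Int] -> Int
weightCount layers =
  sum [ (from + 1) * to | (from, to) <- zip layers (tail layers) ]

main :: IO ()
main = do
  print (weightCount [4, 4, 4, 1])  -- original topology: 45 weights
  print (weightCount fannDef)       -- modified topology: 49 weights
</code>
```

So the change actually gives the network slightly *more* weights to train (49 vs. 45) even though it has one fewer layer.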
Save the changes by typing, <ctrl> O
i.e. hold down the ctrl key and press O (the capital letter ‘oh’).
Note that the most-used editor commands are shown at the bottom of the screen. ‘^’ means hold the control key down. Thus ‘^X’ means hold the ctrl key down and press the letter X.
To exit the editor, simply type ‘^X’. You should now be back at a ‘bash’ command prompt.
Now when you type ./gotrain, it will use your newly modified file, and you can see the new results both in the training (which you just ran) and in the usage (when you type ./gouse).
Inside Train.hs you may play with the network topology (described above), the epoch count (change it from 20000), the desired error, the learning rate, and the momentum. The learning rate says how much of the error is applied to correct the synaptic weights; bigger (1 max) means faster learning. The momentum says how fast weights can move from their present value to a new value; bigger (1 max) means weights can change faster.
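As a sketch of what those knobs look like — the constant names below are hypothetical, so check Train.hs for the actual identifiers — together with the standard backprop-with-momentum update rule that the learning rate and momentum control:

```haskell
-- Hypothetical names; the actual identifiers in Train.hs may differ.
maxEpochs :: Int
maxEpochs = 20000        -- passes over the training data

desiredError :: Double
desiredError = 0.001     -- stop early once the error falls below this

learningRate :: Double
learningRate = 0.7       -- how much of the error corrects the weights (1 max)

momentum :: Double
momentum = 0.1           -- how fast a weight may move to a new value (1 max)

-- The standard backprop-with-momentum update: the new step blends the
-- current (already negated) error-gradient contribution with the
-- previous step, returning the new weight and the step just taken.
updateWeight :: Double -> Double -> Double -> (Double, Double)
updateWeight weight gradStep deltaOld =
  let delta = learningRate * gradStep + momentum * deltaOld
  in (weight + delta, delta)
```

With momentum at 0 the weight moves only by learningRate times the current gradient step; raising momentum lets successive steps in the same direction build up speed.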
Remember, ‘--’ starts a Haskell comment.
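So rather than deleting the original topology line, you can keep it around commented out while you experiment, e.g.:

```haskell
-- fannDef = [4, 4, 4, 1]   -- original topology, kept for reference
fannDef :: [Int]
fannDef = [4, 8, 1]
```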