
Learning: Switching between theory and practice

In one of my first ever blog posts I wrote about the differences I’d experienced between learning the theory of a topic and then seeing it in practice.

The way I remember learning at school and university was that you learn all the theory first and then put it into practice, but I typically don’t find myself doing this whenever I learn something new.

I spent a bit of time over the weekend learning more about neural networks, as my colleague Jen Smith suggested they might be a more effective technique for getting a higher accuracy score on the Kaggle Digit Recogniser problem.

I first came across neural networks during the Machine Learning Class about a year ago, but I didn’t put any of that knowledge into practice and as a result it’s mostly been forgotten, so my first step was to go back and watch the videos again.

Having got a high level understanding of how they work, I thought I’d try to find a neural networks implementation in Mahout, since Jen and I have been hacking with that and I have a reasonable level of familiarity with it.

I could only find people talking about writing an implementation rather than any suggestion that one existed, so I turned to Google and came across netz, a Clojure implementation of neural networks.

On its project page there were links to several 'production ready' Java frameworks for building neural networks, including neuroph, encog and FANN.

I spent a few hours playing around with some of the encog examples, trying to see whether we’d be able to plug the Digit Recogniser problem into it.

To refresh, the digit recogniser problem is a multi-class classification problem where we train a classifier on rows of 784 pixel brightness values, each labelled with the digit it represents.

We should then be able to feed it any new set of 784 pixel brightness values and it will tell us which digit they are most likely to represent.
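To make that concrete, here’s a minimal sketch (my own illustration, not code from Kaggle or any of the frameworks above) of how a single training row could be encoded for a neural network: the 784 pixel values become the input vector, and the known digit becomes a one-hot output vector with ten slots, one per class.

```java
// Hypothetical encoding of one Digit Recogniser training row, assuming the
// Kaggle CSV format "label,pixel0,pixel1,...,pixel783".
public class DigitEncodingSketch {
    static final int PIXELS = 784; // 28x28 image
    static final int DIGITS = 10;  // classes 0-9

    // Returns {input, ideal}: pixel brightnesses scaled to 0..1 and a
    // one-hot vector marking the known digit.
    static double[][] encode(String csvRow) {
        String[] fields = csvRow.split(",");

        double[] ideal = new double[DIGITS];
        ideal[Integer.parseInt(fields[0])] = 1.0; // one-hot label

        double[] input = new double[PIXELS];
        for (int i = 0; i < PIXELS; i++) {
            input[i] = Double.parseDouble(fields[i + 1]) / 255.0;
        }
        return new double[][] { input, ideal };
    }

    public static void main(String[] args) {
        // Dummy row: label 7 followed by 784 zero pixels.
        StringBuilder row = new StringBuilder("7");
        for (int i = 0; i < PIXELS; i++) row.append(",0");

        double[][] encoded = encode(row.toString());
        System.out.println("ideal[7] = " + encoded[1][7]); // prints 1.0
    }
}
```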

I realised that the OCR encog example wouldn’t quite work because it assumed that you’d only have one training example for each class!

From SOMClusterCopyTraining.java:

```java
/*
 * For now, this trainer will only work if you have equal or fewer training elements
 * to the number of output neurons.
 */
```

I was pretty sure that I didn’t want to have 40,000 output neurons (one per training example), so I thought I’d better switch back to theory and make sure I understood how neural networks are supposed to work by reading the slides from an introductory talk.

Now that I’ve read those I’m ready to go back to the practical side and try to build up a network a bit more manually than I’d imagined last time, using the BasicNetwork class.
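For what it’s worth, here’s the shape I have in mind, following the feed-forward pattern from encog’s own examples: the network is built layer by layer and trained with resilient propagation. The 100-neuron hidden layer and the stopping threshold are numbers I’ve made up for illustration, not anything prescribed by the problem.

```java
import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

public class DigitNetworkSketch {

    static BasicNetwork buildNetwork() {
        BasicNetwork network = new BasicNetwork();
        network.addLayer(new BasicLayer(null, true, 784));                    // input: one neuron per pixel
        network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 100)); // hidden layer (size is a guess)
        network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 10)); // output: one neuron per digit
        network.getStructure().finalizeStructure();
        network.reset(); // randomise the initial weights
        return network;
    }

    static void train(BasicNetwork network, double[][] inputs, double[][] ideals) {
        MLDataSet trainingSet = new BasicMLDataSet(inputs, ideals);
        ResilientPropagation train = new ResilientPropagation(network, trainingSet);
        do {
            train.iteration();
        } while (train.getError() > 0.01); // arbitrary stopping threshold
        train.finishTraining();
    }
}
```

The appealing thing about this shape is that the output layer only needs 10 neurons, one per digit, no matter how many training examples there are, which avoids the constraint in the SOM trainer above.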

I’m sure that as I do I’ll have to switch back to theory and read a bit more, then code a bit more, and so the cycle goes on!
