An old Machine Learning Algorithm




The last Machine Learning algorithm I worked on at UNSW for my PhD research was this one, from the late 1980s:

Brebner, P., "Weka: An Algorithm for Learning Complex Concepts from Few Examples", School of Electrical Engineering and Computer Science, University of New South Wales, DCS report, 1989.

Coincidentally, Weka is also the name of a well-known collection of ML algorithms from my original university (Waikato, NZ), although that one came after this date.

I guess there aren't that many interesting NZ birds to choose names from!

This was the last of a series of algorithms I developed for learning first-order concepts from relational data in an autonomous robot learning domain (with temporal logic). It also appears that I had run out of names of computers from The Hitchhiker's Guide to the Galaxy, which I had been using up until then: Gargantubrain ("An Heuristic Algorithm for Learning Non-Recursive Horn Clauses from Positive and Negative Examples"); Deep Thought ("An Algorithm for Learning Causal Laws"); and Omni-Cognate ("Active Experimentation with Omni-Cognate").

Galatea was the name of my first learner, from my Waikato MSc.


This one was different from the previous ones, which were primarily discriminant learners. All they had to do was pick the correct category of prediction (e.g. if you do this action in a world with this subset of state you'll get this effect, and not any other effect). This approach to learning worked well when there were large numbers of negative examples to prevent over-generalisation (false positives). The learned concepts started as the universally general rule (i.e. everything happens) and were made more discriminant (specific) as negative examples were encountered.
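To make that concrete, here is a minimal, hypothetical sketch of that general-to-specific style of discriminative learning (my own toy illustration, not the original thesis code): the concept starts as all wildcards and is only pinned down when it wrongly covers a negative example.

```python
# Hypothetical sketch only (not the original 1980s code): general-to-specific
# discriminative learning over simple attribute vectors. The concept starts as
# the universally general rule (all wildcards) and is specialised only when it
# wrongly covers a negative example.

WILDCARD = "?"

def covers(hypothesis, example):
    """A hypothesis covers an example if every non-wildcard constraint matches."""
    return all(h == WILDCARD or h == v for h, v in zip(hypothesis, example))

def specialise(hypothesis, negative, positives):
    """Pin down one wildcard attribute so the negative example is excluded,
    preferring a value actually seen in the positive examples."""
    for i, bad_value in enumerate(negative):
        if hypothesis[i] != WILDCARD:
            continue
        candidates = {p[i] for p in positives if p[i] != bad_value}
        if candidates:
            new_h = list(hypothesis)
            new_h[i] = sorted(candidates)[0]
            return tuple(new_h)
    return hypothesis  # cannot specialise further

def learn(positives, negatives, n_attributes):
    hypothesis = tuple(WILDCARD for _ in range(n_attributes))
    for neg in negatives:
        while covers(hypothesis, neg):
            new_h = specialise(hypothesis, neg, positives)
            if new_h == hypothesis:
                break  # not enough information to discriminate this negative
            hypothesis = new_h
    return hypothesis

# Toy usage: attributes are (action, object_weight, surface); the concept is
# "the object moves". With few negatives the rule stays overly general.
positives = [("push", "light", "smooth"), ("push", "light", "rough")]
negatives = [("push", "heavy", "smooth"), ("pull", "light", "smooth")]
print(learn(positives, negatives, 3))   # -> ('push', 'light', '?')
```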

The weakness of this approach was that if you didn't have large numbers of negative examples, and/or (relatedly) if the concepts to be learned were "complex", then the learned concept was invariably overly general. This can have bad side-effects if you are an active learner in a dangerous world (crossing the road with your eyes closed may work some of the time, but not forever).

The solution I tried (from memory) was a hybrid approach with two sets of heuristics working at once. The default heuristic was to specialise the current concept when negative examples were observed (maximise inter-category differences). The second heuristic attempted to maximise similarity within the positive examples of the category (maximise intra-category similarity). This is trickier to do, but it does mean you can also use the learned concept in a generative way to produce more representative positive examples on demand. From memory I used this to generate new musical scores of a particular type of music (e.g. Bach's music was the set of positive examples, Mozart's music was the negative examples). This approach was also better at picking up on larger structures (which are really important for music, otherwise it doesn't sound very interesting, just random notes). In hindsight I was inspired by work I'd also done on clustering over first-order/relational data.
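As a loose analogy for that generative use (a hedged sketch, not the actual thesis algorithm, and the note sequences below are invented): characterise the positive examples by their note-to-note transition statistics, then sample new sequences from those same statistics.

```python
# Hedged analogy, not the original algorithm: characterise the positive
# examples (here as note-to-note transition counts) and then use that same
# characterisation generatively to emit new, representative sequences.
import random
from collections import defaultdict, Counter

def characterise(positive_sequences):
    """Count note-to-note transitions across the positive example sequences."""
    transitions = defaultdict(Counter)
    for seq in positive_sequences:
        for a, b in zip(seq, seq[1:]):
            transitions[a][b] += 1
    return transitions

def generate(transitions, start, length=8):
    """Sample a new sequence that resembles the positives."""
    seq = [start]
    while len(seq) < length:
        counts = transitions.get(seq[-1])
        if not counts:
            break
        notes, weights = zip(*counts.items())
        seq.append(random.choices(notes, weights=weights)[0])
    return seq

# Toy usage with made-up motifs standing in for the positive examples.
positives = [["C", "E", "G", "E", "C"], ["C", "E", "G", "A", "G", "E"]]
model = characterise(positives)
print(generate(model, start="C"))
```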

A lot of deep machine learning algorithms can now be used generatively, so they must be characterisation-focussed rather than discrimination-focussed.

This book covers the two approaches:

Machine Learning: Discriminative and Generative, by Tony Jebara.

Yes, it does look as if many current algorithms are generative.

And this course outline.
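As a toy illustration of the distinction (my own minimal sketch, with made-up 1-D Gaussian data): a discriminative model only needs to separate the classes, whereas a generative/characterisation model estimates how each class's examples are distributed, so the same model can both classify and produce new examples.

```python
# Minimal sketch of the generative ("characterisation") side of the split,
# using made-up 1-D data: estimate p(x | class) per class, classify via Bayes'
# rule, and reuse the same per-class model to sample new positive examples.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one feature, two classes (stand-ins for positives/negatives).
x_pos = rng.normal(loc=2.0, scale=0.5, size=100)
x_neg = rng.normal(loc=-1.0, scale=0.8, size=100)

# Characterise each class by its own (Gaussian) distribution.
mu_pos, sd_pos = x_pos.mean(), x_pos.std()
mu_neg, sd_neg = x_neg.mean(), x_neg.std()

def log_gauss(x, mu, sd):
    return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd)

def classify(x):
    # Equal priors assumed: compare class-conditional log-likelihoods.
    return "pos" if log_gauss(x, mu_pos, sd_pos) > log_gauss(x, mu_neg, sd_neg) else "neg"

def sample_positives(n=5):
    # The same characterisation used generatively.
    return rng.normal(loc=mu_pos, scale=sd_pos, size=n)

print(classify(1.8), classify(-0.9))  # discrimination
print(sample_positives())             # generation
```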

Do any of the current generative algorithms work over first-order relational data (e.g. graph data)?

Maybe. Some blogs. A PhD.

Actually, I've just realised that I have used generative modelling techniques in the last 10 years; I just hadn't thought of them as ML. (I guess they weren't; they were more Markov-algorithm based, producing bigger/different networks from a sample of a current network, where the networks were models of the performance of distributed systems including workloads, software, networks, servers, etc.)
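A rough, hypothetical sketch of that kind of technique (not the actual tooling I used): estimate a simple statistic from a small sample network (here just its average degree) and grow a bigger synthetic network with degree-weighted attachment so its local structure stays roughly similar.

```python
# Hypothetical sketch (not the original tooling): grow a larger synthetic
# network from a small sample by (1) estimating the sample's average degree and
# (2) attaching each new node to existing nodes with probability proportional
# to their current degree (a simple Markov-style growth step).
import random

def average_degree(edges, n_nodes):
    return 2 * len(edges) / n_nodes

def grow_network(sample_edges, sample_nodes, target_nodes, seed=42):
    rng = random.Random(seed)
    # Each new node gets roughly half the sample's average degree in new edges.
    m = max(1, round(average_degree(sample_edges, sample_nodes) / 2))
    nodes = list(range(m + 1))
    edges = [(i, j) for i in nodes for j in nodes if i < j]  # small seed clique
    targets = [n for e in edges for n in e]  # node list weighted by degree
    for new in range(len(nodes), target_nodes):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))  # degree-proportional choice
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])
        nodes.append(new)
    return nodes, edges

# Toy usage: a 6-node sample network scaled up to 50 nodes.
sample_edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
nodes, edges = grow_network(sample_edges, sample_nodes=6, target_nodes=50)
print(len(nodes), len(edges), round(average_degree(edges, len(nodes)), 2))
```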

What, therefore, is the connection (and what are the future research areas) between generative modelling and the learning of graph networks?
