Posts

Showing posts from July, 2017

Optical Illusions and Machine (Vision) Learning

The Coffer Illusion. The ABC had an interesting article today on optical illusions and the workings of the brain. The Coffer Illusion (above) is really spooky! Looks like rectangles, right? Look long enough and another shape appears (circles). It took me some time originally, but once I could see them I could come back even hours later and they would be almost immediately apparent. It's like your brain has been "programmed" with the circles so you see them again. Can I ever un-see them??? TODO: come back in a week and check :-) So the question I had was: given that these sorts of optical illusions tell you something about how the brain works, do they tell you anything about Machine Vision? For example, could ML algorithms be trained to recognise either (or both) rectangles or circles in this image? How about optical illusions that require switching between recognising one object or another? E.g. this one, which has a single face, or two faces kissing? Thi

“I'm a scientist, once I do something, I want to do something else.”

“I'm a scientist, once I do something, I want to do something else.” ― Clifford Stoll. Recently I was trying to work out why I get bored doing the same thing twice, i.e. once I've solved a problem that's great, but I want to move on to something else. It turns out I'm not the only one suffering from this peculiarity. I was reminded of Clifford Stoll's quote recently by this article: https://medium.com/the-mission/in-1995-this-astronomer-predicted-the-internets-greatest-failure-68a1c3927e46 Stoll was famous (the last time I'd heard of him) for catching hackers, but it turns out he's also famous for astronomy and for predictions about the problems with "free" information on the internet. He said the internet is "a wasteland of unfiltered data". The problem was bad enough before commercial entities were allowed internet access in newsgroups, but he predicted it would get worse once everyone was connected, and it did. I met him briefly a

Mega Trends - Digital Twins?

Image: Australian Mega Fauna. Recently I was asked what I thought about the impact of ICT Mega Trends on technology/business strategy. The first question is: what are "Mega" trends? My guess is that they are the new buzzwords coined in the last six months, which will be replaced just as quickly. Just like Mega Fauna?! The latest Gartner mega trends are here. Most of these are obvious, but a couple caught my attention: Digital Twins and Meshes. Trend No. 5: Digital Twin. Within three to five years, billions of things will be represented by digital twins, a dynamic software model of a physical thing or system. Using physics data on how the components of a thing operate and respond to the environment, as well as data provided by sensors in the physical world, a digital twin can be used to analyze and simulate real-world conditions, respond to changes, improve operations and add value. Digital twins function as proxies for the combination of skilled individuals (e.g.
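To make the digital twin idea concrete, here is an illustrative sketch only (the class, fields and threshold are hypothetical, not from Gartner or any product): a twin is just a software proxy kept in sync with sensor readings from the physical thing, which you can then analyze or simulate against instead of the real device.

```python
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """Toy digital twin: a software proxy mirroring a physical pump."""
    temperature_c: float = 20.0
    history: list = field(default_factory=list)

    def ingest(self, reading_c: float) -> None:
        # Update the twin's state from a sensor reading in the physical world
        self.temperature_c = reading_c
        self.history.append(reading_c)

    def overheating(self, threshold_c: float = 80.0) -> bool:
        # Run analysis on the twin instead of the real device
        return self.temperature_c > threshold_c

twin = PumpTwin()
twin.ingest(75.0)
twin.ingest(85.0)
print(twin.overheating())  # True after the last reading
```

In a real system the `ingest` step would be fed by an IoT data pipeline, and the analysis side would use physics models rather than a single threshold.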

Down the Lambda (Functions) Hole!

Further adventures Down the Lambda (Functions) Hole... I decided to learn Python for Data Analytics recently, so I jumped into the language for a few days. I soon noticed that it supports Lambda Functions. Cool, they've been around for ages. Alonzo Church invented the Lambda Calculus in the 1930s, and this influenced languages like LISP (which I learnt years ago but soon replaced with Prolog as my preferred AI language). The Clever Thing about Lambda Functions is that they are anonymous (no name) and are used in conjunction with Higher Order Functions (functions that can take functions as arguments and return functions). A number of questions came to mind in relation to Data Analytics, AWS, languages etc.:
1. Is Python the best/only DA language to support Lambda Functions?
2. Are they useful/essential for DA?
3. What's the connection to Map/Reduce (from LISP but now Hadoop etc.)?
4. Does Java support Lambda Functions? Well?
5. What good examples are there in fo
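The lambda/higher-order-function combination above can be sketched in a few lines of Python: anonymous functions are passed to the higher-order functions `map`, `filter` and `reduce` (note `reduce` lives in `functools` since Python 3).

```python
from functools import reduce

# Anonymous (lambda) functions handed to higher-order functions:
squares = list(map(lambda x: x * x, [1, 2, 3, 4]))        # apply a function to every element
evens = list(filter(lambda x: x % 2 == 0, [1, 2, 3, 4]))  # keep elements where the function is true
total = reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 0)   # fold the list down to a single value

print(squares, evens, total)  # [1, 4, 9, 16] [2, 4] 10
```

This `map`-then-`reduce` pattern is essentially the small-scale version of the Map/Reduce model that Hadoop applies across a cluster.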

Generative adversarial networks (GANs)

Picture from here. I came across Generative Adversarial Networks (GANs) recently. These look really interesting, as it's a hybrid approach to machine learning using both generative and discriminative learning at the same time. This is the abstract from the paper: Generative Adversarial Networks. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio (Submitted on 10 Jun 2014). We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solut
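The minimax two-player game the abstract describes can be written out as the value function from the Goodfellow et al. paper: the discriminator D tries to maximize it, while the generator G tries to minimize it.

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Here $p_{\text{data}}$ is the real data distribution and $p_z$ is the noise prior the generator samples from; at the game's equilibrium the generator's distribution matches $p_{\text{data}}$.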