Archive for the ‘deep learning’ Category

How does Google Photos find trees?


A friend posted on Facebook that if he types "alligator" into his Google Photos search box, it finds all the photos of alligators in his collection, even the ones that were never tagged. I was intrigued …

If you want to learn how it works, jump to the last section of this post, where I have collected links for both general reading and the technical details. Here, let me just dazzle you with some results.

So I searched for "trees" in my photos:

[Three photos returned for "trees"]

It was even able to find a few things that merely look like trees, for example the following from a Museum of Fine Arts exhibition.

[Photo: tree-like exhibit at the Museum of Fine Arts]

Which made sense to me: one can look at texture, shape, and color, and from those infer that an image contains a tree. They might have added some context cues too, such as whether there is a horizon (I already knew they were using a CNN; this was me trying to make sense of the results as if I did not know that).
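To make that reasoning concrete, here is a toy sketch of the kind of hand-crafted pipeline I was imagining: color and texture histograms fed to a linear classifier. This is my illustration only, not what Google Photos does; the file lists and labels are hypothetical.

```python
import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression

def color_texture_features(path, bins=8):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    # Per-channel color histograms capture the "green stuff" cue.
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 1))[0] for c in range(3)]
    # Gradient-magnitude histogram as a crude texture descriptor.
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    tex = np.histogram(np.hypot(gx, gy), bins=bins, range=(0, 1))[0]
    feat = np.concatenate(hists + [tex]).astype(np.float32)
    return feat / (feat.sum() + 1e-8)  # normalize so image size does not matter

# Hypothetical training data: paths to images with and without trees.
tree_paths = ["tree1.jpg", "tree2.jpg"]
other_paths = ["beach1.jpg", "room1.jpg"]

X = np.stack([color_texture_features(p) for p in tree_paths + other_paths])
y = np.array([1] * len(tree_paths) + [0] * len(other_paths))
clf = LogisticRegression(max_iter=1000).fit(X, y)
```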

What really surprised me were the results below. The one on the left has no color, none of the usual tree texture, and only a very thin tree-like structure. On the right is an image in which the trees have been painted blue (it is an art installation where trees were colored blue to represent veins carrying oxygen), so the system cannot rely on color cues.

[Photos: a thin tree with no color cues, and the Gainesville Blue Trees art installation]

That made me think they might be using a little more than just image cues: perhaps they compute similarity among images and check whether any similar image has already been labeled, tagged by someone, or given a caption or description. Which, in this day and age, is a fair and smart thing to do.
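Here is a minimal sketch of what such tag propagation could look like. To be clear, this is my speculation, not Google's actual pipeline; the feature arrays, tag list, and 0.8 threshold are all hypothetical.

```python
import numpy as np

def propagate_tags(feats_unlabeled, feats_labeled, tags_labeled, thresh=0.8):
    """feats_*: (n, d) arrays of image features; tags_labeled: list of tag strings."""
    # Cosine similarity between every unlabeled and every labeled image.
    a = feats_unlabeled / np.linalg.norm(feats_unlabeled, axis=1, keepdims=True)
    b = feats_labeled / np.linalg.norm(feats_labeled, axis=1, keepdims=True)
    sims = a @ b.T
    out = []
    for row in sims:
        j = int(row.argmax())
        # Only copy a tag when the best match is confidently similar.
        out.append(tags_labeled[j] if row[j] >= thresh else None)
    return out
```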

I remembered a fairly old paper, I think from the UCF Computer Vision Lab (by one of Mubarak Shah's students, if I recall correctly), where they were trying to distinguish between grass, shrubs, trees, … and it was not an easy task. So I experimented along those lines. The next thing I searched for was "grass", and yes, the results were quite different. Although not as accurate, they did make sense.

[Photos returned for "grass"]

Terms like "food" gave the worst results. More well-defined objects, however, gave much better results: it was able to find a "cycle" that was not even the main subject of its image, and the results for "car" and "airplane" were good. For the term "chair" it did something interesting: it found people in a sitting pose, most of them sitting on a sofa or on the ground, so it seems to have associated the human sitting pose with the concept of "chair".

How does the magic work? (aka how can Google do these amazing things?)

If you want Google's own view of how the 'magic' works, have a look at this blog post: http://googleresearch.blogspot.com/2013/06/improving-photo-search-step-across.html. If you are looking to learn something from it, look up Freebase, CNNs, and Dr. Hinton's work, and note that their final layer is just a linear classifier. For the technical know-how, have a look at the paper "ImageNet Classification with Deep Convolutional Neural Networks" (Krizhevsky, Sutskever, and Hinton).
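The "deep features plus linear classifier" idea is easy to sketch: freeze a pretrained CNN and replace its last layer with a fresh linear classifier for your own labels. ResNet-18 below is my stand-in for convenience; the Google system was built on an AlexNet-style network, and the label count is hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained CNN as a fixed feature extractor (torchvision >= 0.13 weights API).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False              # keep the learned features fixed
model.fc = nn.Linear(model.fc.in_features, 10)  # 10 hypothetical photo labels

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def step(x, y):
    """One training step on a batch of images x (N,3,224,224) and labels y (N,)."""
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```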

However, things have moved at quite a fast pace: about a year ago Google announced that they can now generate natural-language descriptions of images: http://googleresearch.blogspot.com/2014/11/a-picture-is-worth-thousand-coherent.html. Technical things to look for: RNNs (recurrent neural networks) and how they are used for machine translation; have a look at their paper.
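The core idea is a CNN encoder feeding an RNN decoder that emits the caption word by word. Below is a bare-bones sketch of such a decoder; the layer sizes and names are mine, not theirs, though like their system it uses an LSTM.

```python
import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    def __init__(self, vocab_size, feat_dim=512, embed_dim=256, hidden=256):
        super().__init__()
        self.img_proj = nn.Linear(feat_dim, embed_dim)  # image feature acts as the first "word"
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, img_feats, captions):
        # Prepend the projected CNN feature to the embedded caption tokens,
        # then predict the next word at every time step.
        x = torch.cat([self.img_proj(img_feats).unsqueeze(1),
                       self.embed(captions)], dim=1)
        h, _ = self.lstm(x)
        return self.out(h)  # (batch, seq_len + 1, vocab_size) logits
```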


Face Recognition by Yi Ma and Features from Andrew Ng's Recent Work


I was thinking about Andrew Ng's "Building High-level Features Using Large Scale Unsupervised Learning" (ICML 2012) and Yi Ma's robust face recognition via sparse representation (code available in my other blog post).

What could be the benefits of taking the features from Andrew Ng's work and explicitly modeling them with sparse dictionary learning? One certainly cannot use the dictionary as Yi Ma does, with raw training samples as atoms, since that is not feasible for huge amounts of data and large numbers of people. So would the features from Ng's work provide robustness when used for dictionary learning and then sparse coding?
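A minimal sketch of that question, assuming `deep_feats` stands in for features from any large-scale feature learner (the array sizes and parameters here are placeholders, not tuned values):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Stand-ins for real network features; replace with actual feature matrices.
deep_feats = np.random.randn(500, 256)
new_feat = np.random.randn(256)

# Learn a compact dictionary over the features instead of using
# raw training images as atoms.
dico = DictionaryLearning(n_components=64, alpha=1.0, max_iter=500)
codes = dico.fit_transform(deep_feats)   # sparse code for each sample
D = dico.components_                     # learned dictionary (64, 256)

# Score a new feature vector by reconstruction error, loosely mirroring
# the residual-based rule of sparse-representation classification.
new_code = dico.transform(new_feat.reshape(1, -1))
recon_error = np.linalg.norm(new_feat - new_code @ D)
```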

Or could group sparse coding and block dictionary learning be used to model the network itself, thereby reducing the complexity and the time required to train it?

Just a thought.

NIPS 2012: Multimodal Learning with Deep Boltzmann Machines


This is quite an interesting paper from Ruslan Salakhutdinov's group (University of Toronto); project page: http://www.cs.toronto.edu/~nitish/multimodal/, video lecture: http://videolectures.net/nips2012_salakhutdinov_multimodal_learning/. [They used a Gaussian RBM while building the DBM.]

It is interesting in terms of both the application and how the DBM is used.

[Figure: multimodal DBM]

In this way, given the features of one modality, the model can be used to infer the other. I recommend watching the video lecture.

Vision and Deep Learning in 2012


This entry is an effort to collect important deep learning papers published in 2012, especially those related to computer vision.

There is the general resource http://deeplearning.net/, but no good resource that collects deep learning papers with respect to computer vision problems.

General Resources 

Interesting Papers 

Getting into Deep Learning


Yes, the fever has reached me too 🙂 and I have decided to look into deep learning. Here are some interesting papers if you want to have a look:

    • Training Products of Experts by Minimizing Contrastive Divergence, Geoffrey Hinton (see the CD-1 sketch after this list)
    • Deep Boltzmann Machines; Salakhutdinov, Hinton; Proceedings of the International Conference on Artificial Intelligence and Statistics, 2009. Knowing and understanding the following concepts will help in reading this paper:
      • Boltzmann Machines and RBMs (reading Products of Experts first is highly recommended)
      • Annealed Importance Sampling (AIS) (Neal 2001), or have a look at “Importance Sampling: A Review” by Tokdar and Kass
      • Mean field as used in variational inference (the Wikipedia page is quite helpful)
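To make the contrastive divergence idea concrete, here is a toy binary RBM trained with CD-1. This is my own minimal implementation, not Hinton's code; a Gaussian visible layer, as used in the DBM work, would replace the visible sigmoid with a linear-plus-noise update.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.01):
    """One CD-1 update. v0: (batch, n_vis) binary data; W, b, c updated in place."""
    # Positive phase: hidden probabilities and a sample given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step back to a "fantasy" visible vector.
    pv1 = sigmoid(h0 @ W.T + b)
    ph1 = sigmoid(pv1 @ W + c)
    # Gradient approximated by <v h>_data - <v h>_model.
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)

n_vis, n_hid = 784, 128
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b = np.zeros(n_vis); c = np.zeros(n_hid)

# Stand-in batch; use real binarized data (e.g. MNIST) in practice.
v = (rng.random((32, n_vis)) < 0.5).astype(float)
cd1_step(v, W, b, c)
```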

Reading is good; deriving the equations is better.

Another good read for beginners is

From Neural Networks to Deep Learning http://xrds.acm.org/article.cfm?aid=2000787

  • A very interesting point made by Jeff Hawkins (author of On Intelligence and founder of Numenta):

It requires a temporal memory that learns what follows what. It’s inherent in the brain. If a neural network has no concept of time, you will not capture a huge portion of what brains do. Most Deep Learning algorithms do not have a concept of time