
Questioning Sparsity


Went through the CVPR 2011 paper “Are Sparse Representations Really Relevant for Image Classification?” by Rigamonti, Brown, and Lepetit.

With so many recent papers on sparsity, the title asks a very valid question. They report a lot of experiments and compare many different techniques. Their conclusion is that sparsity is important while learning the feature dictionary, but not helpful during classification. The only thing it really convinced me of, though, is that perhaps in their setting the convex formulation is not working.
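To make the dictionary-vs-classification distinction concrete, here is a minimal sketch of that two-stage pipeline. It swaps in scikit-learn's `DictionaryLearning` as a stand-in (the paper uses its own formulation), and the toy patch data is made up; the point is just that sparsity enters at the dictionary-learning stage, while at test time a plain linear projection onto the dictionary can replace sparse coding.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.random((100, 64))  # toy stand-in for image patches

# Stage 1: learn a 32-atom dictionary with an L1 sparsity penalty (alpha).
# This is where the paper says sparsity matters.
dl = DictionaryLearning(n_components=32, alpha=1.0, max_iter=10,
                        random_state=0)
dl.fit(X)
D = dl.components_  # shape (32, 64): one atom per row

# Stage 2: the paper's finding, as I read it, is that at test time a
# plain (non-sparse) linear projection onto D works about as well as
# running the full sparse-coding optimization again.
codes_linear = X @ D.T  # shape (100, 32): one descriptor per patch
```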

Looking forward to seeing rebuttals, or papers questioning or answering the questions raised by Rigamonti et al., in the coming year. Overall this looks like a paper that will be cited quite a lot.

Anchors and Cluster Centers


Went through A Probabilistic Representation for Efficient Large Scale Visual Recognition Tasks (Bhattacharya, Sukthankar, Jin, Mubarak Shah), CVPR 2011. It has good results, but basically they are trying to find weights to fit a mixture of Gaussians, each centered around a selected feature vector.

At least, that’s my understanding…

Instead of clustering to find the words of a dictionary, they randomly select features from the dataset and call them ‘Anchors’. These ‘Anchors’ then do the same job as words. But instead of matching each feature to a single word, they compute a weight for every word: for a given image with K features, each feature gets to say how important each ‘Anchor’ is, and together these votes make up the weight vector ‘w’. They find this weight vector through maximum likelihood estimation. Once each image has its weight vector, they run an SVM for classification.
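The pipeline above can be sketched roughly as follows. This is my reading of the idea, not the paper's actual estimator: the toy data, the isotropic-Gaussian soft assignment, and the `sigma` parameter are all my assumptions, used as a crude stand-in for their maximum-likelihood weight estimation.

```python
import numpy as np

rng = np.random.default_rng(0)

# All local features pooled from a training set (made-up toy data)
features = rng.standard_normal((1000, 128))

# Randomly pick M features as 'Anchors' (instead of k-means words)
M = 50
anchors = features[rng.choice(len(features), size=M, replace=False)]

def image_weights(img_feats, anchors, sigma=1.0):
    """Soft-assign each feature to every anchor via an isotropic
    Gaussian kernel, then average -- a crude stand-in for the paper's
    maximum-likelihood weight estimation."""
    # squared distance from every feature to every anchor: (K, M)
    d2 = ((img_feats[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    # subtract the per-row minimum for numerical stability before exp
    resp = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / (2 * sigma**2))
    resp /= resp.sum(axis=1, keepdims=True)  # each feature votes, sums to 1
    return resp.mean(axis=0)                 # one weight per anchor

img_feats = rng.standard_normal((40, 128))  # K features from one image
w = image_weights(img_feats, anchors)
# w (length M) is the per-image descriptor that would go into the SVM
```

Note the descriptor length is fixed by the number of anchors M, not by how many features the image has.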

Their results are good, but I still wonder how they know how many Anchors to pick; and since the Anchors are chosen randomly, every run of their experiment will give different results.

But then again, they have done extensive experiments; their experiments section is worth a look.

CVPR 2011: Interesting Papers


A few of the papers that look interesting: