Archive for the ‘Research’ Category

Manifolds Manifolds and Manifolds


How to define manifolds, and how to say what a manifold is… ha!

Today, in one of the books, I read this line by Élie Cartan:

“The general notion of manifold is quite difficult to define with precision”

That says it all!

Have a look at “A Panoramic View of Riemannian Geometry” by Marcel Berger.

Sparsity, Sparsity, Sparsity … Some Interesting Papers


  1. Jacob, L., Obozinski, G., & Vert, J.-P. (2009). Group lasso with overlap and graph lasso. Proceedings of the 26th Annual International Conference on Machine Learning – ICML ’09, 1-8. New York, New York, USA: ACM Press. doi: 10.1145/1553374.1553431.
    1. http://videolectures.net/icml09_jacob_glog/
    2. www.machinelearning.org/archive/icml2009/papers/471.pdf
  2. D.M. Witten, R. Tibshirani, and T. Hastie, “A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis.,” Biostatistics (Oxford, England), vol. 10, Jul. 2009, pp. 515-34.
    1. www-stat.stanford.edu/~tibs/Correlate/pmd.pdf
  3. R. Jenatton, G. Obozinski, and F. Bach, “Structured Sparse Principal Component Analysis,” Journal of Machine Learning Research: W&CP, vol. 9, Sep. 2009, pp. 366-373.
    1. http://jmlr.csail.mit.edu/proceedings/papers/v9/jenatton10a/jenatton10a.pdf
  4. M. Yuan and Y. Lin, “Model selection and estimation in regression with grouped variables,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 68, Feb. 2006, pp. 49-67.
    1. http://www2.isye.gatech.edu/~myuan/papers/glasso.final.pdf
  5. S. Mosci, A. Verri, and L. Rosasco, “A primal-dual algorithm for group sparse regularization with overlapping groups,” Advances in Neural Information Processing Systems, 2010.
    1. http://books.nips.cc/papers/files/nips23/NIPS2010_0776.pdf
  6. J. Mairal, F. Bach, J. Ponce, and G. Sapiro, “Online Learning for Matrix Factorization and Sparse Coding,” Journal of Machine Learning Research, vol. 11, 2010, pp. 19-60.
    1. http://jmlr.csail.mit.edu/papers/volume11/mairal10a/mairal10a.pdf

Questioning Sparsity


Went through Rigamonti’s CVPR 2011 paper “Are Sparse Representations Really Relevant for Image Classification?” (Rigamonti, Brown, Lepetit).

With so many recent papers on sparsity, the question above is a very valid one. They report a lot of experiments and compare many different techniques. Their conclusion is that sparsity is important while learning the feature dictionary but not helpful during classification. The only thing it managed to convince me of, though, is that maybe, in their setting, convexity is not working.

Looking forward to rebuttals, or papers questioning or answering the questions raised by Rigamonti, in the coming year. Overall, this appears to be a paper that will be cited quite a lot.

CVPR 2011, Interesting Papers


A few of the papers look interesting.

Beta Process and Denoising


Studying “Dependent hierarchical beta process for image interpolation and denoising,” JMLR 2011, by Mingyuan Zhou, Hongxia Yang, Guillermo Sapiro, David Dunson, and Lawrence Carin.

More information coming as I read the paper…

Car Datasets


I am looking for car detection datasets, especially rear and front views.
I have found the following few:

  1. http://lear.inrialpes.fr/data
  2. http://www.vision.ee.ethz.ch/~bleibe/data/datasets.html#cars-rear
    1. They also have side-view and multiview car datasets
  3. http://www.vision.caltech.edu/html-files/archive.html
  4. http://vasc.ri.cmu.edu/idb/html/car/


The ones in 2 and 3 are better. I am trying to get more datasets; if you have some, please send me a link.


Shape Carving


The paper “A Theory of Shape by Space Carving”

Still have to read it.

Structure from Motion


Studying “Nonrigid Structure from Motion in Trajectory Space” by Ijaz Akhter et al.

I have to give a presentation on it to the study group here.

To understand it properly, I recommend reading “A Closed-Form Solution to Non-Rigid Shape and Motion Recovery” by Xiao, Chai, and Kanade, IJCV 2006. It explains the mathematics properly.

The starting work, I think, was Tomasi and Kanade’s factorization method: “Shape and motion from image streams under orthography: a factorization method,” 1992.
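The core of the factorization idea can be sketched in a few lines of NumPy. This is a minimal illustration on synthetic orthographic data, not the full algorithm (it stops before the metric upgrade that resolves the remaining 3×3 ambiguity; all names and the random setup are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
P, F = 50, 12                          # P points tracked over F frames
S = rng.random((3, P)) - 0.5           # synthetic 3D shape (3 x P)

# Build the 2F x P measurement matrix W: each frame contributes two rows,
# an orthographic projection R (two orthonormal rows) plus a translation t.
W = np.zeros((2 * F, P))
for f in range(F):
    R = np.linalg.qr(rng.standard_normal((3, 3)))[0][:2]
    t = rng.random(2)[:, None]
    W[2 * f:2 * f + 2] = R @ S + t

# Tomasi-Kanade observation: subtracting each row's mean removes the
# translation, and the registered matrix has rank at most 3. SVD then
# factors it into motion x shape (up to an invertible 3x3 ambiguity).
W0 = W - W.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(W0, full_matrices=False)
M_hat = U[:, :3] * np.sqrt(s[:3])          # 2F x 3 motion estimate
S_hat = np.sqrt(s[:3])[:, None] * Vt[:3]   # 3 x P shape estimate

assert np.allclose(M_hat @ S_hat, W0, atol=1e-8)   # exact rank-3 factorization
assert s[3] < 1e-8 * s[0]                          # effectively rank 3
```

The trajectory-space method generalizes this: for nonrigid scenes the shape itself is expressed in a low-dimensional basis, but the rank-constrained factorization of a measurement matrix is the same starting point.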

Background Subtraction using the codebook


I read “Background Modeling and Subtraction by Codebook Construction” by Kyungnam Kim, Thanarat H. Chalidabhongse, David Harwood, and Larry Davis.

I implemented the MATLAB code for it.

It works better than Grimson’s background subtraction method on the video sequences we have.

Will be uploading the code soon. 
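Until then, here is a much-simplified grayscale sketch of the codebook idea in Python (my own toy version: each pixel keeps a list of intensity-range codewords; the paper’s color distortion measure, brightness bounds, and MNRL-based temporal filtering are all omitted, and the function names and `eps` tolerance are mine):

```python
import numpy as np

def train_codebook(frames, eps=10.0):
    """Build a per-pixel codebook from training frames.

    Each codeword is a [lo, hi] intensity range. A training value that
    falls within eps of an existing codeword widens it; otherwise a new
    codeword is created for that pixel.
    """
    H, W = frames[0].shape
    books = [[[] for _ in range(W)] for _ in range(H)]
    for f in frames:
        for i in range(H):
            for j in range(W):
                v = float(f[i, j])
                for cw in books[i][j]:
                    if cw[0] - eps <= v <= cw[1] + eps:
                        cw[0] = min(cw[0], v)
                        cw[1] = max(cw[1], v)
                        break
                else:
                    books[i][j].append([v, v])
    return books

def subtract(frame, books, eps=10.0):
    """Mark a pixel as foreground if no codeword matches its value."""
    H, W = frame.shape
    fg = np.ones((H, W), dtype=bool)
    for i in range(H):
        for j in range(W):
            v = float(frame[i, j])
            if any(cw[0] - eps <= v <= cw[1] + eps for cw in books[i][j]):
                fg[i, j] = False
    return fg
```

The per-pixel multi-codeword structure is what lets the method handle multimodal backgrounds (e.g. moving foliage), which is where single-Gaussian-style models struggle.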


Current Status: Representing Haar Features in the Integral Image


Regarding NBS, Dr. Jeff explained a basic point I had missed: while representing the eigenvector as a linear combination of the Haar features, we have to convert the Haar-feature basis vectors to the integral domain as well.

For example, let b1 represent the Haar feature having ones from (1,1) to (10,10). Then its integral representation would have:

  • 1 at (1,1) & (10,10) and
  • -1 at (1,10) & (10, 1)

In this representation there are zeros everywhere except at these four locations.

In this way, phi (the NBS representation of the eigenvector) is the sum of the integral-represented basis vectors, each multiplied by its ci. Phi therefore also has zeros at many locations, making it sparse, so when the image is projected onto phi, the coefficient can be computed with a small number of additions.
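To check that only four entries are involved, here is a small NumPy verification. It uses the usual zero-padded integral image, whose corner indexing is shifted by one relative to the listing above, but the point is the same: the dense inner product with a rectangular Haar basis vector equals a ±1 combination of four integral-image values (the sizes and indices here are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 20, 20
img = rng.random((H, W))

# Haar-like basis vector: ones over rows r1..r2, cols c1..c2 (0-based, inclusive).
r1, c1, r2, c2 = 0, 0, 9, 9
b1 = np.zeros((H, W))
b1[r1:r2 + 1, c1:c2 + 1] = 1.0

# Integral image, padded with a leading zero row/column so the four-corner
# formula needs no boundary special-casing.
II = np.zeros((H + 1, W + 1))
II[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)

# Dense inner product <b1, img> ...
dense = (b1 * img).sum()

# ... equals a signed sum of exactly four integral-image entries -- the
# four nonzeros of b1's integral-domain representation.
fast = II[r2 + 1, c2 + 1] - II[r1, c2 + 1] - II[r2 + 1, c1] + II[r1, c1]

assert np.isclose(dense, fast)
```

Since phi is a sum of a few such four-nonzero vectors scaled by the ci’s, projecting the integral image onto phi costs only a handful of additions per basis vector, instead of a full dense inner product.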