Archive for the ‘sparsity’ Category

Face Recognition by Yi Ma and Features from Andrew Ng’s Recent Work


Was thinking in terms of Andrew Ng’s “Building High-level Features Using Large Scale Unsupervised Learning” (ICML 2012) and Yi Ma’s “Robust Face Recognition via Sparse Representation” (code posted on my other blog).

What could be the benefits of taking the features coming out of Andrew Ng’s network and explicitly modeling them using sparse dictionary learning? One definitely cannot use the training images themselves as the dictionary, as Yi Ma does, since that is not feasible for a huge amount of data and people. So would the features coming from Andrew Ng’s work provide the robustness when used for dictionary learning and then sparse coding?
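A minimal sketch of the pipeline I have in mind, assuming the network’s features have already been extracted into a matrix (everything here, including the shapes, is a made-up placeholder, not anyone’s actual setup). scikit-learn’s MiniBatchDictionaryLearning implements the online algorithm of Mairal et al., which is what should make the dictionary-learning step feasible at scale:

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

    rng = np.random.RandomState(0)
    features = rng.randn(1000, 256)   # hypothetical stand-in for the network's features

    # Learn a compact dictionary instead of stacking all training images,
    # which is what makes this feasible for many subjects.
    dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                       batch_size=64, random_state=0)
    D = dico.fit(features).components_      # learned dictionary, shape (128, 256)

    # Sparse-code query feature vectors against the learned dictionary.
    coder = SparseCoder(dictionary=D, transform_algorithm='omp',
                        transform_n_nonzero_coefs=10)
    codes = coder.transform(features[:5])   # sparse codes, shape (5, 128)

Classification in the style of Yi Ma’s SRC would then compare class-wise reconstruction residuals of the codes, but against learned atoms rather than raw training images.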

Or could group sparse coding and block dictionary learning be used to better model the network itself, thus reducing the complexity and the time required to train it?
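For concreteness, by group sparse coding I mean replacing the plain l1 penalty with a sum of l2 norms over groups of atoms, i.e. the usual group lasso penalty (how to choose the grouping G, say one block per class or per hidden unit, is my own assumption here):

    \min_{\alpha} \; \tfrac{1}{2} \| x - D\alpha \|_2^2 + \lambda \sum_{g \in \mathcal{G}} \| \alpha_g \|_2

Since the l2 norm zeroes out whole groups at once, entire blocks of the dictionary (or of the network) could be switched off together, which is where the reduction in complexity would come from.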

Just a thought.


Sparsity, Sparsity, Sparsity … Some interesting papers


  1. L. Jacob, G. Obozinski, and J.-P. Vert, “Group Lasso with Overlap and Graph Lasso,” Proceedings of the 26th Annual International Conference on Machine Learning (ICML ’09), 2009, pp. 1-8. doi: 10.1145/1553374.1553431.
    1. http://videolectures.net/icml09_jacob_glog/
    2. http://www.machinelearning.org/archive/icml2009/papers/471.pdf
  2. D.M. Witten, R. Tibshirani, and T. Hastie, “A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis,” Biostatistics, vol. 10, Jul. 2009, pp. 515-534.
    1. http://www-stat.stanford.edu/~tibs/Correlate/pmd.pdf
  3. R. Jenatton, G. Obozinski, and F. Bach, “Structured Sparse Principal Component Analysis,” Journal of Machine Learning Research: W&CP, vol. 9, 2010, pp. 366-373.
    1. http://jmlr.csail.mit.edu/proceedings/papers/v9/jenatton10a/jenatton10a.pdf
  4. M. Yuan and Y. Lin, “Model selection and estimation in regression with grouped variables,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 68, Feb. 2006, pp. 49-67.
    1. http://www2.isye.gatech.edu/~myuan/papers/glasso.final.pdf
  5. S. Mosci, S. Villa, A. Verri, and L. Rosasco, “A primal-dual algorithm for group sparse regularization with overlapping groups,” Advances in Neural Information Processing Systems 23, 2010.
    1. http://books.nips.cc/papers/files/nips23/NIPS2010_0776.pdf
  6. J. Mairal, F. Bach, J. Ponce, and G. Sapiro, “Online Learning for Matrix Factorization and Sparse Coding,” Journal of Machine Learning Research, vol. 11, 2010, pp. 19-60.
    1. http://jmlr.csail.mit.edu/papers/volume11/mairal10a/mairal10a.pdf

Questioning Sparsity


Went through Rigamonti, Brown, and Lepetit’s CVPR 2011 paper “Are Sparse Representations Really Relevant for Image Classification?”

With quite a lot of papers on sparsity appearing recently, the above is a very valid question. They report a lot of experiments and compare many different techniques. Their conclusion is that sparsity is important while learning the feature dictionary, but not helpful during classification. The only thing the paper managed to convince me of, though, is that perhaps in their setting the convexity is not working.
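Their headline contrast is easy to state in miniature: fix one dictionary, encode the data once with a real l1 sparse-coding solve and once with a trivial feed-forward soft-thresholding of the correlations, and feed both encodings to the same classifier. A hedged sketch (random stand-in data and dictionary, not their experimental setup):

    import numpy as np
    from sklearn.decomposition import SparseCoder

    rng = np.random.RandomState(0)
    X = rng.randn(100, 64)                          # stand-in patches/features
    D = rng.randn(32, 64)
    D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm atoms

    # Encoding 1: actual l1-regularized sparse coding (iterative solve).
    sparse = SparseCoder(dictionary=D, transform_algorithm='lasso_cd',
                         transform_alpha=0.5).transform(X)

    # Encoding 2: cheap feed-forward soft-thresholding of the correlations D x,
    # the kind of encoder the paper finds performs comparably at test time.
    feedforward = SparseCoder(dictionary=D, transform_algorithm='threshold',
                              transform_alpha=0.5).transform(X)

If both encodings classify about equally well, that is exactly their point: the expensive sparse solve only pays off when learning the dictionary, not at test time.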

Looking forward to rebuttals, or to papers questioning or answering the issues raised by Rigamonti et al., in the coming year. Overall this appears to be a paper that will be cited quite a lot.