Archive for the ‘Current Status’ Category

Vision and Deep Learning in 2012


This entry is an effort to collect important deep learning papers published in 2012, especially those related to computer vision.

There is the general resource http://deeplearning.net/, but no good resource that collects deep learning papers with respect to computer vision problems.

General Resources 

Interesting Papers 


Co-Segmentation in CVPR 2012


Here is a list of Co-Segmentation papers from CVPR 2012. {If you find some other interesting papers regarding Co-Segmentation, please send a message or post a comment. Thanks!}

  • “Multi-Class Cosegmentation” Armand Joulin, Francis Bach, Jean Ponce
  • “On Multiple Foreground Cosegmentation” Gunhee KIM, Eric P. Xing
  • “Higher Level Segmentation: Detecting and Grouping of Invariant Repetitive Patterns” Yunliang Cai, George Baciu: not directly a co-segmentation paper, but it can be seen that way.
  • “Random Walks based Multi-Image Segmentation: Quasiconvexity Results and GPU-based Solutions” Maxwell D. Collins, Jia Xu, Leo Grady, Vikas Singh
  • “A Hierarchical Image Clustering Cosegmentation Framework” Edward Kim, Hongsheng Li, Xiaolei Huang
  • “Unsupervised Co-segmentation Through Region Matching” Jose C. Rubio, Joan Serrat, Antonio López, Nikos Paragios

Some interesting papers to look into

  • “Learning Image-Specific Parameters for Interactive Segmentation” Zhanghui Kuang, Dirk Schnieders, Hao Zhou, Kwan-Yee K. Wong, Yizhou Yu, Bo Peng
  • “Graph Cuts Optimization for Multi-Limb Human Segmentation in Depth Maps” Antonio Hernández-Vela, Nadezhda Zlateva, Alexander Marinov, Miguel Reyes, Petia Radeva, Dimo Dimov, Sergio Escalera
    • {I just want to read it to see how the depth data is being used.}
  • “Active Learning for Semantic Segmentation with Expected Change”  Alexander Vezhnevets, Joachim M. Buhmann, Vittorio Ferrari
    • The basic objective is to learn about “Active Learning” and how it is used.
  • “Semantic Segmentation using Regions and Parts”  Pablo Arbeláez, Bharath Hariharan, Chunhui Gu, Saurabh Gupta, Lubomir Bourdev, Jitendra Malik
    • This one is an oral paper.
  • “Affinity Learning via Self-diffusion for Image Segmentation and Clustering” Bo Wang, Zhuowen Tu
  • “Bag of Textons for Image Segmentation via Soft Clustering and Convex Shift”  Zhiding Yu, Ang Li, Oscar C. Au, Chunjing Xu
  • “Multiple Clustered Instance Learning for Histopathology Cancer Image Classification, Segmentation and Clustering” Yan Xu, Jun-Yan Zhu, Eric Chang, Zhuowen Tu
  • “Maximum Weight Cliques with Mutex Constraints for Video Object Segmentation”  Tianyang Ma, Longin Jan Latecki

Getting into Deep Learning


Yes, the fever has reached me too 🙂 and I have decided to look into deep learning. Some interesting papers, if you want to have a look:

    • “Training Products of Experts by Minimizing Contrastive Divergence”, Geoffrey Hinton
    • “Deep Boltzmann Machines”; Salakhutdinov, Hinton; Proceedings of the International Conference on Artificial Intelligence and Statistics, 2009. Knowing and understanding the following concepts will help in reading this paper:
      • Boltzmann Machines and RBMs (reading “Products of Experts” first is highly recommended; a toy CD-1 update is sketched below)
      • Annealed Importance Sampling (AIS) (Neal 2001), or have a look at “Importance Sampling: A Review” by Tokdar and Kass
      • Mean field as used in variational inference (the Wikipedia page is quite helpful)

Reading is good; deriving the equations is better.
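Since contrastive divergence keeps coming up in the list above, here is a toy CD-1 update for a binary RBM, just to get a feel for the positive and negative phases. This is a minimal NumPy sketch under my own naming choices (cd1_step, the toy sizes, the learning rate); it is not code from any of the papers.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step(v0, W, b, c, lr=0.1):
        # One contrastive-divergence (CD-1) update for a binary RBM.
        # v0: (batch, n_visible) binary data; W: (n_visible, n_hidden);
        # b: visible biases; c: hidden biases.
        # Positive phase: hidden probabilities given the data.
        ph0 = sigmoid(v0 @ W + c)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step down to the visibles and back up.
        pv1 = sigmoid(h0 @ W.T + b)
        v1 = (rng.random(pv1.shape) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + c)
        # Gradient estimate: data statistics minus reconstruction statistics.
        n = v0.shape[0]
        W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
        b += lr * (v0 - v1).mean(axis=0)
        c += lr * (ph0 - ph1).mean(axis=0)
        return W, b, c

    # Toy usage: 6 visible units, 3 hidden units, one random binary batch.
    W = 0.01 * rng.standard_normal((6, 3))
    b = np.zeros(6)
    c = np.zeros(3)
    batch = (rng.random((16, 6)) < 0.5).astype(float)
    W, b, c = cd1_step(batch, W, b, c)

The update is literally data statistics minus one-step reconstruction statistics; deriving why this approximates the log-likelihood gradient is exactly the kind of equation work worth doing.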

Another good read for beginners is

“From Neural Networks to Deep Learning”: http://xrds.acm.org/article.cfm?aid=2000787

  • A very interesting point made by Jeff Hawkins (author of On Intelligence and founder of Numenta):

It requires a temporal memory that learns what follows what. It’s inherent in the brain. If a neural network has no concept of time, you will not capture a huge portion of what brains do. Most deep learning algorithms do not have a concept of time.

Sparsity, Sparsity, Sparsity… Some interesting papers (a small group-lasso sketch follows the list)


  1. Jacob, L., Obozinski, G., & Vert, J.-P. (2009). Group lasso with overlap and graph lasso. Proceedings of the 26th Annual International Conference on Machine Learning – ICML ’09, 1-8. New York, New York, USA: ACM Press. doi: 10.1145/1553374.1553431.
    1. http://videolectures.net/icml09_jacob_glog/
    2. www.machinelearning.org/archive/icml2009/papers/471.pdf
  2. D.M. Witten, R. Tibshirani, and T. Hastie, “A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis.,” Biostatistics (Oxford, England), vol. 10, Jul. 2009, pp. 515-34.
    1. www-stat.stanford.edu/~tibs/Correlate/pmd.pdf
  3. R. Jenatton, G. Obozinski, and F. Bach, “Structured Sparse Principal Component Analysis,” Journal of Machine Learning Research: W&CP, vol. 9, 2010, pp. 366-373.
    1. http://jmlr.csail.mit.edu/proceedings/papers/v9/jenatton10a/jenatton10a.pdf
  4. M. Yuan and Y. Lin, “Model selection and estimation in regression with grouped variables,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 68, Feb. 2006, pp. 49-67.
    1. http://www2.isye.gatech.edu/~myuan/papers/glasso.final.pdf
  5. S. Mosci, S. Villa, A. Verri, and L. Rosasco, “A primal-dual algorithm for group sparse regularization with overlapping groups,” Advances in Neural Information Processing Systems, 2010.
    1. http://books.nips.cc/papers/files/nips23/NIPS2010_0776.pdf
  6. J. Mairal, F. Bach, J. Ponce, and G. Sapiro, “Online Learning for Matrix Factorization and Sparse Coding,” Journal of Machine Learning Research, vol. 11, 2010, pp. 19-60.
    1. http://jmlr.csail.mit.edu/papers/volume11/mairal10a/mairal10a.pdf
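To make the shared machinery in these papers concrete, here is a minimal sketch of the group-lasso proximal step (block soft-thresholding), assuming non-overlapping groups; the overlapping case of paper 1 needs a latent-variable duplication trick that is omitted here. The function name and toy data are my own illustration, not from any of the papers.

    import numpy as np

    def prox_group_lasso(w, groups, t):
        # Proximal operator of t * sum_g ||w_g||_2 for non-overlapping groups.
        # w: parameter vector; groups: list of index arrays; t: threshold.
        out = w.copy()
        for g in groups:
            norm = np.linalg.norm(w[g])
            # Shrink the whole group toward zero; zero it out entirely
            # if its norm falls below the threshold t.
            out[g] = 0.0 if norm <= t else (1.0 - t / norm) * w[g]
        return out

    # Toy usage: two groups of three coefficients each.
    w = np.array([0.5, -0.2, 0.1, 3.0, -2.0, 1.0])
    groups = [np.arange(0, 3), np.arange(3, 6)]
    print(prox_group_lasso(w, groups, t=1.0))
    # The first (small) group is zeroed out entirely; the second is shrunk.

This group-wise, all-or-nothing shrinkage is exactly the behavior these papers exploit for structured variable selection.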

Beta Process and Denoising


Studying “Dependent Hierarchical Beta Process for Image Interpolation and Denoising”, published at AISTATS 2011 (JMLR W&CP), by Mingyuan Zhou, Hongxia Yang, Guillermo Sapiro, David Dunson, and Lawrence Carin.

More information coming as I read the paper…

Current Status: Representing Haar Features in the Integral Image


Regarding NBS, Dr. Jeff explained to me a basic point I had missed: while representing the eigenvector as a linear combination of Haar features, we have to convert the Haar-feature basis vectors to the integral domain as well.

For example, let b1 represent the Haar feature that is one everywhere from (1,1) to (10,10). Then its integral-domain representation has

  • 1 at (1,1) and (10,10), and
  • -1 at (1,10) and (10,1).

In this representation there are zeros everywhere except at these 4 locations.

Thus phi (the NBS representation of the eigenvector) is the summation of the integral-represented basis vectors multiplied by the ci’s. Phi therefore also has zeros in many locations, making it sparse, so when the image is projected onto phi the coefficient can be computed with a small number of additions (see the sketch below).
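Here is a small NumPy sketch of that corner trick, assuming 0-indexed arrays and an inclusive integral image; the exact corner offsets depend on the indexing convention, so treat this as one consistent choice rather than the exact layout above. The function names are my own.

    import numpy as np

    def integral_image(x):
        # Inclusive 2-D cumulative sum: ii[r, c] = sum of x[:r+1, :c+1].
        return x.cumsum(axis=0).cumsum(axis=1)

    def box_corner_rep(shape, r1, c1, r2, c2):
        # Sparse integral-domain representation of a box Haar basis vector:
        # its dot product with the integral image equals the sum of the raw
        # image over rows r1..r2 and cols c1..c2 (inclusive). At most 4
        # entries are nonzero, which is what makes phi sparse.
        beta = np.zeros(shape)
        beta[r2, c2] += 1.0
        if r1 > 0:
            beta[r1 - 1, c2] -= 1.0
        if c1 > 0:
            beta[r2, c1 - 1] -= 1.0
        if r1 > 0 and c1 > 0:
            beta[r1 - 1, c1 - 1] += 1.0
        return beta

    # Check: projecting the integral image onto the sparse corner map gives
    # the same value as summing the raw pixels inside the box.
    rng = np.random.default_rng(0)
    x = rng.random((20, 20))
    ii = integral_image(x)
    beta = box_corner_rep(x.shape, 1, 1, 10, 10)
    assert np.isclose((beta * ii).sum(), x[1:11, 1:11].sum())

Since phi = sum_i ci * beta_i keeps only a few nonzeros per Haar box, projecting an image onto phi costs only a handful of additions on the integral image, which is the whole point.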

Current Status


Just trying to implement the code for this.

Trying to see how the code works.