The morning after the big win


The euphoria of the win must not have left when Pakistan opened its eyes in the morning. The victory over the arch-rival, a victory many of us wanted but some part of us was not expecting, should have left you exuberant, full of energy, hopeful.

Was there not a hint of disbelief that today is just another day? How could it be? It ought to be Eid, or National Day, or even better an unexpected holiday because some bigwig passed away. There should have been an extra kick in your walk.

I have woken on many such mornings, sometimes the morning after a victory, sometimes the day after an exhilarating display of talent. Ah! When Pakistan won the World Cup, that win in the Hockey Champions Trophy, Saqlain's momentous effort with Wasim, Shoaib bringing South Africa to its knees in a single over, Jahangir Khan defeating Jansher, Junoon…

But you should ask the question: why? To be fair, it's not the biggest win, just four matches. It cannot change the future of the nation, cannot remove the corrupt, cannot produce enough energy, cure cancer, … reduce a long list of disappointments. But it's a win that leaves you with an unexplainable joyous feeling. You must be feeling that wave of invincibility, that anything is within grasp, that good can happen, that we can win, we can crush, and we can achieve.

Do not let that belief of invincibility pass you by. If you let it pass, it will be just a memory, a memory in which 11 players did something wonderful and the rest of us merely joined in the celebrations. Or you can ride this wave of ego, bravado, and confidence. As they say, monetize it.

Hold on to the moment of confidence when Shadab asked for a review, that confidence, that air of faith in one's own actions. How many times will you have that confidence when you are typing up your results or being questioned about your experiments?

Hold on to the effort Fakhar put in when he was given a lifeline; let it be your effort when you have floated through your semesters these last few years, not failing only out of luck. Hard work, full of concentration, day after day, ball after ball, quiz after quiz.

Hold on to Aamir’s effort to put himself back in the arena after being caught in match-fixing, humiliating his nation and perhaps himself too. His talent might be God-given, but his effort is not; he was not born with the daily practice he puts in. You should ask the question: how many hours have you spent on your manuscript, or sweating in the library solving those equations? Understand the frustration Sarfaraz must have felt every time he had to tell his colleagues, please play with some responsibility, and still he did not lose his nerve.

And hold on to Hardik Pandya’s effort in a lost cause… not all efforts get you victories, but they are important. (Remember Wahab Riaz’s bowling in the last World Cup; it would be unfortunate if that is all he is remembered for, but that spell left everyone mesmerized.)

Hold on to all of them, or just one, and make it yours… otherwise, a few years down the road these will be just “what were you doing when…” memories.

Be a dreamer, be humble… and show up every morning for a fight.


Finally solved the login loop on Ubuntu 14.04 after NVIDIA


After numerous days of fighting and then frustratedly giving up, I was finally able to solve the login loop problem. The problem appeared when, in my quest to work on deep convolutional networks, I started working with the Caffe library. All was fine until I updated something, disturbing the fine balance and leaving me stuck at the login screen, which kept popping back up even after I entered the password.

In the process, instead of things getting better, I ended up in a place where my GUI would not even load and I was stuck at the terminal. I thought I could make it work if I changed from lightdm to gdm. That change resulted in another headache: a blinking black screen with a cursor in the corner. Pressing Ctrl-Alt-F1 took me to the terminal, but it kept blinking back to the blank screen, making it impossible to work on anything. Following https://community.linuxmint.com/tutorial/view/842 and https://seravo.fi/2015/fixing-black-screen-after-login-in-ubuntu-14-04, I tried changing the kernel command line in GRUB by setting nomodeset. Although the authors claim it solved their problems, for me it did not work. Recognizing that my gdm must have gone bad, I ran

sudo dpkg-reconfigure gdm

and changed back to lightdm.

One problem solved: no more flickering screen. And I was back to the first problem, stuck in the login loop 😦

Trying my luck, I ran the following:

sudo nano /etc/X11/xorg.conf

It was empty, so I went ahead and deleted it.

Then I ran:

sudo cp /etc/X11/xorg.conf.nvidia-xconfig-original /etc/X11/xorg.conf
sudo /etc/init.d/lightdm stop
sudo X -configure
sudo /etc/init.d/lightdm restart

And I was able to get past the login screen.

Now I was stuck at a black desktop screen; https://itsfoss.com/how-to-fix-no-unity-no-launcher-no-dash-in-ubuntu-12-10-quick-tip/ came to the rescue.

Graph Isomorphism is “just” Quasipolynomial


Well, if Graph Isomorphism were a person, you would feel for that person: it is no longer special. For long we have been debating whether or not it is NP-complete, and now the question has lost much of its bite 😦 If it were a person, I would say: sorry, mate!

Anyhow, it’s wonderful news in theoretical computer science, but I just realized it’s going to take a long time to become practical for computer vision applications. It does require some long thinking, though.

TensorFlow: another Deep Learning library?


The week starts with interesting but predictable news: there is one more player in deep learning. We had Caffe, Torch, and Theano, and now we have TensorFlow vying for our deep learning solutions.

I have questions. For example, why is it called TensorFlow? It might be because they are not presenting themselves as just another deep learning library, but as a library that can represent the flow of data through a graph. The graph’s nodes can be computational or just data-pushers. That makes me think of it as different from other deep learning libraries, at least from Caffe.
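To make the dataflow idea concrete, here is a toy sketch in Python of a graph whose nodes either hold data or compute from the outputs of their input nodes. This is not TensorFlow’s actual API, just the concept.

```python
# Toy dataflow graph: each node is either a data-pusher (holds a value)
# or a computational node (applies a function to its input nodes).
class Node:
    def __init__(self, op=None, inputs=(), value=None):
        self.op = op          # function for computational nodes
        self.inputs = inputs  # edges: where this node's data flows from
        self.value = value    # constant for data nodes

    def eval(self):
        if self.op is None:
            return self.value
        return self.op(*(n.eval() for n in self.inputs))

# Build the graph for (2 + 3) * 4 and run it.
a = Node(value=2)
b = Node(value=3)
c = Node(value=4)
s = Node(op=lambda x, y: x + y, inputs=(a, b))
out = Node(op=lambda x, y: x * y, inputs=(s, c))
print(out.eval())  # 20
```

The point is that the graph is built first and evaluated afterwards, which is what lets a framework schedule, parallelize, or ship the computation elsewhere.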

Let me confess: I have only used Caffe. I have found it really easy to use; however, setting up a custom network with a different form of learning looks like a really difficult thing to do. Torch, I have heard, lets you set up your own learning mechanism, but I have never worked with it. As per Tombone’s recent small survey, more people are inclined to use Caffe, perhaps because it is easy to use.

TensorFlow looks exciting, and its tutorial section is well curated. It has the shine and smoothness of a product made by a big corporation, rather than the roughness we see in most academic products. However, how much its “open source” is really open (easy to change, easy to update, etc.) and how much we can play with the code, rather than just use it as an API, is going to define where it will be used. My feeling is that TensorFlow’s API will be in use pretty quickly; whether it will be used to re-imagine the existing deep learning paradigm is still a question.

How does Google Photos find trees?


A friend posted on Facebook that if he types “alligator” into his Google Photos search page, it can find all the photos of alligators in his images, even the ones that are not tagged. I was intrigued…

If you want to learn how it works, go to the last section of this blog, where I have collected links for both general reading and technical detail. Let me just dazzle you with some results here.

So I searched for “trees” in my photos:

[photos of trees returned by the search]

It was even able to find a few things that merely look like trees, for example the following from a Museum of Fine Arts exhibition.

[photo from the exhibition] Which to me made sense: one can look at the texture, shape, and colors, and thus infer that an image has a tree in it. They might have added some context too, such as whether there is a horizon, etc. (I already knew that they are using a CNN; this was me trying to make sense of it as if I did not know.)

What really surprised me were the results below. The one on the left has no color, none of the tree-like texture, and only a very thin tree-like structure. On the right there is an image in which the trees have been painted blue (it’s an art installation where trees were colored blue to represent veins carrying oxygen), so it cannot be using color cues.

[two photos: a bare, colorless tree, and the Gainesville Blue Trees art installation]

That made me think they might be using a little more than just image cues: they might be using similarity among images, checking whether there is another similar image that has been labeled, tagged by someone, or given a caption or description. Which, in this day and age, is a fair and smart thing to do.

I remembered a very old paper, I think from the UCF Computer Vision Lab (one of Mubarak Shah’s students, I believe), where they were trying to distinguish between grass, shrubs, trees, … and it was not an easy task. So I experimented. The next thing I searched for was “grass”, and yes, the results were quite different. They were not as accurate, but they did make sense.

[photos returned by the “grass” search]

Terms like “food” give the worst results. More defined objects, however, give much better results: for example, it was able to find a “cycle” that was not even the main subject of the image, and results for “car” and “airplane” were similarly good. For the term “chair” it did an interesting thing: it found people in a sitting pose. Most of these were people sitting on a sofa or on the ground, but it had made an association between the human pose and the concept of “chair”.

How does the magic work? (aka how can Google do these amazing things?)

If you want Google’s view of how the ‘magic’ works, have a look at this blog post: http://googleresearch.blogspot.com/2013/06/improving-photo-search-step-across.html. If you are looking to learn something from it, look up Freebase, CNNs, Dr. Hinton… and note that their final layer is just a linear classifier. If you are interested in the technical know-how, have a look at the paper “ImageNet Classification with Deep Convolutional Neural Networks” by Krizhevsky, Sutskever, and Hinton.
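To see why “just a linear classifier” is enough once you have good features, here is a toy NumPy sketch of that final scoring step. The labels, dimensions, and weights are all made up for illustration; a real system learns W and b from labeled data.

```python
import numpy as np

# Toy final layer: score each label with a linear function of the
# image's (hypothetical) CNN feature vector. One matrix-vector product
# per photo is all the "search" needs at query time.
rng = np.random.default_rng(0)
labels = ["tree", "grass", "chair"]
W = rng.standard_normal((3, 4096))   # 3 labels x 4096-dim features
b = np.zeros(3)                      # per-label bias
f = rng.standard_normal(4096)        # features for one photo

scores = W @ f + b                   # one score per label
best = labels[int(np.argmax(scores))]
print(best)                          # the label this toy model surfaces
```

The heavy lifting is in producing f; once the features separate concepts well, a linear boundary per label suffices.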

However, things have moved at quite a fast pace: about a year ago Google announced that they can now provide “natural descriptions of images”: http://googleresearch.blogspot.com/2014/11/a-picture-is-worth-thousand-coherent.html. Technical things to look for: RNNs (recurrent neural networks) and how they are used for machine translation; have a look at their paper, etc.
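For a sense of what the recurrent part does, here is a toy single recurrent step in NumPy: the hidden state is updated from the current input and the previous state, which is how a caption can be emitted word by word. The dimensions and weights are invented, and real captioning models are far more elaborate (e.g., LSTMs).

```python
import numpy as np

# One step of a vanilla recurrent network: the new hidden state mixes
# the current input with the previous hidden state through a tanh.
def rnn_step(x, h, Wx, Wh, b):
    return np.tanh(x @ Wx + h @ Wh + b)

rng = np.random.default_rng(1)
d_in, d_h = 8, 16                       # made-up sizes
Wx = rng.standard_normal((d_in, d_h)) * 0.1
Wh = rng.standard_normal((d_h, d_h)) * 0.1
b = np.zeros(d_h)

h = np.zeros(d_h)
for t in range(5):                      # unroll over a 5-step sequence
    h = rnn_step(rng.standard_normal(d_in), h, Wx, Wh, b)
print(h.shape)  # (16,)
```

The same state-carrying loop is what lets these models consume an image encoding and then generate a sentence one token at a time.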

latex error: .xbb (no BoundingBox) for pdf files


I have been encountering this error a lot for PDF files that worked fine on one machine but, after being moved to another, would not let my .tex file compile. After spending quite a bit of time trying to fix it from within LaTeX, a friend recommended I just fix the PDFs themselves.

So do the following:

# first convert the pdf to eps

pdftops -eps yourPDFName.pdf yourPDFName.eps

# regenerate the eps with a proper bounding box

epstool --copy --bbox yourPDFName.eps --output yourPDFNameNew.eps

# convert back to pdf

epstopdf yourPDFNameNew.eps


dipping toes in GPU programming


After trying many different ways to speed up poselets, I could not get below 1 second per frame. The best I could do was 6 seconds per frame (with the pyramid levels given in the standard code, which is too many pyramid levels).

Therefore the natural progression was: let’s go to the GPU. For that I am going through the lectures of “Introduction to Parallel Programming” on Udacity. So far the lectures are going quickly. The tricky parts have not yet started, but the instructor’s teaching style keeps you interested and involved.

Hopefully I will be calculating HOG and evaluating convolutions next week. The interesting part will be making it work at different scales while keeping everything inside the GPU. Keeping my fingers crossed for getting below 1 second.
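Before porting the convolution to a GPU kernel, it helps to have a CPU reference to check against. Here is a minimal NumPy sketch of a valid-mode 2D convolution (really cross-correlation, as usual in vision code); the image and the box filter are illustrative only.

```python
import numpy as np

# Naive valid-mode 2D cross-correlation: the CPU reference a GPU
# kernel's output would be compared against.
def conv2d_valid(img, k):
    kh, kw = k.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # each output pixel is a windowed dot product
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

img = np.arange(25.0).reshape(5, 5)
k = np.ones((3, 3)) / 9.0          # 3x3 box filter
result = conv2d_valid(img, k)
print(result.shape)                # (3, 3)
```

On the GPU the two inner loops collapse into one thread per output pixel, which is exactly where the speedup over this quadruple-nested version comes from.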