## What happens to Superpixels?

While reading papers about superpixels, you see them hugging the edges of objects so nicely that you think: aha, why not? It seems only natural to use superpixels.

It is only when you apply them to your own images that you run into all sorts of problems. With one method you get superpixels of wildly varying shapes; with another the shapes look less scary, but the superpixel boundaries sometimes fail to follow the actual image edges.

What can you change? In most methods, the only user-controlled parameter is the number of superpixels. Some methods also let you set how much weight to give to the edges, but there is still no stable formulation.
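To make that edge-weight trade-off concrete, here is a minimal sketch of a SLIC-style combined distance, where a compactness-like parameter weights the spatial term against the color term. The function name, parameter names, and the grid spacing `S` are my own illustrative choices, not the API of any particular library:

```python
import numpy as np

def slic_distance(pixel_lab, pixel_xy, center_lab, center_xy,
                  compactness=10.0, S=20.0):
    """SLIC-style combined distance: color distance plus a spatially
    normalized distance, weighted by a compactness parameter.
    Larger compactness -> more regular, grid-like superpixels;
    smaller -> boundaries follow color edges more closely.
    (Illustrative sketch; names are assumptions, not a library API.)"""
    d_color = np.linalg.norm(np.asarray(pixel_lab) - np.asarray(center_lab))
    d_space = np.linalg.norm(np.asarray(pixel_xy) - np.asarray(center_xy))
    return np.hypot(d_color, (compactness / S) * d_space)

# Same pixel and center, two compactness settings: the spatial term
# dominates as compactness grows, pulling pixels toward the grid center.
loose = slic_distance([50, 10, 10], [5, 5], [52, 12, 8], [15, 15], compactness=1.0)
tight = slic_distance([50, 10, 10], [5, 5], [52, 12, 8], [15, 15], compactness=40.0)
assert tight > loose
```

So even where an edge-weight knob exists, it trades boundary adherence against shape regularity rather than giving you both.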

The problem this creates is: how do you compare superpixels across images? Shape and size do not appear to be very meaningful; edge directions along the superpixel boundary could be one option, but I could not find any proper method for comparison across images. Any such method should take into account that, due to some quirk of the computation, a superpixel that should have been one region may have been broken into two, or one that was square may have become slightly elongated on one side.
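As a starting point for such a comparison, one could at least compute descriptors that degrade gracefully under those distortions, e.g. area plus an elongation measure from the covariance of pixel coordinates. This is a sketch of my own, not a published comparison method:

```python
import numpy as np

def shape_descriptor(labels, sp_id):
    """Rotation-invariant shape summary of one superpixel: its area and
    its elongation (ratio of the covariance eigenvalues of the pixel
    coordinates). Elongation is ~1 for a compact blob and much larger
    for a stretched one, so a square that grew a tail in one run still
    has a comparable area. (Illustrative sketch, not a standard method.)"""
    ys, xs = np.nonzero(labels == sp_id)
    coords = np.stack([ys, xs], axis=1).astype(float)
    cov = np.cov(coords, rowvar=False)
    eig = np.sort(np.linalg.eigvalsh(cov))
    elongation = eig[1] / max(eig[0], 1e-9)
    return {"area": len(ys), "elongation": elongation}

labels = np.zeros((10, 10), dtype=int)
labels[2:4, 1:9] = 1          # a thin horizontal strip, label 1
d = shape_descriptor(labels, 1)
assert d["area"] == 16 and d["elongation"] > 3
```

Such descriptors handle mild elongation, but a split superpixel would still need an explicit one-to-many matching step on top.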

Then there is the problem of defining the neighborhood. Should a superpixel touching along just a one-pixel boundary count as a neighbor? And how should the neighborhood distance be computed: as the distance between centers, or as the percentage of boundary shared between the superpixels?
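The shared-boundary option at least has a simple implementation: count the 4-connected pixel edges between differently labeled pixels. This is a minimal sketch of that idea (the function name is mine):

```python
import numpy as np

def adjacency_stats(labels):
    """For a superpixel label map, collect every adjacent pair of labels
    and the length of their shared boundary, counted as 4-connected
    pixel edges. One could then keep only pairs whose shared boundary
    exceeds some fraction of a superpixel's perimeter, instead of
    accepting every one-pixel contact as a neighbor."""
    pairs = {}
    # horizontal pixel pairs, then vertical pixel pairs
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        diff = a != b
        for u, v in zip(a[diff].ravel(), b[diff].ravel()):
            key = (min(u, v), max(u, v))
            pairs[key] = pairs.get(key, 0) + 1
    return pairs

labels = np.array([[0, 0, 1],
                   [0, 0, 1],
                   [2, 2, 1]])
stats = adjacency_stats(labels)
# -> {(0, 1): 2, (0, 2): 2, (1, 2): 1}
```

With these counts, the one-pixel-contact question becomes a threshold choice rather than a yes/no built into the representation.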

More and more algorithms base their computations on superpixels, yet they deal with these problems quite haphazardly. They compute a large number of features per superpixel, hoping that one of the features will somehow cancel out the discrepancies discussed above. We need to develop more principled solutions, with clear thinking about the objectives.
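For reference, the kind of per-superpixel feature vector those algorithms pile up is cheap to compute, which is part of why the throw-everything-in approach is so tempting. A minimal sketch with `np.bincount` (mean intensity per channel plus pixel count; the function name is mine):

```python
import numpy as np

def superpixel_features(labels, image):
    """Compute a small feature vector per superpixel: mean value per
    image channel plus pixel count, via np.bincount. A stand-in for
    the long ad hoc feature lists the text complains about."""
    n = labels.max() + 1
    flat = labels.ravel()
    counts = np.bincount(flat, minlength=n).astype(float)
    means = np.stack(
        [np.bincount(flat, weights=image[..., c].ravel(), minlength=n)
         for c in range(image.shape[-1])],
        axis=1) / counts[:, None]
    return np.concatenate([means, counts[:, None]], axis=1)

labels = np.array([[0, 0], [1, 1]])
img = np.array([[[10., 0.], [20., 0.]],
                [[30., 4.], [50., 4.]]])
feats = superpixel_features(labels, img)   # one row per superpixel
assert feats.shape == (2, 3)
assert feats[0, 0] == 15.0 and feats[1, 0] == 40.0
```

Adding ever more such columns is easy; the hard part the text calls for is deciding which of them actually serve the objective.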