Tuesday, May 29, 2012

Variance and Width

1. Variance
  In the last post, I mentioned that my next step was to use variance to distinguish some errors in material recognition. It turns out that this doesn't work. For some specific materials, like metal, variance is a good measurement; but for error detection, variance needs to be valid for all kinds of materials before I can use it to fix the errors, and it fails for several of them, like the picture shown below. I sorted the variances of all the training and testing data, and ind1 and ind2 show the resulting orderings. The variances of the last 6 materials' training and testing data do not correspond to each other.
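The ind1/ind2 check could be sketched like this (a Python stand-in for my Matlab sorting step; the variance values are made up for illustration):

```python
import numpy as np

def variance_rank_match(train_vars, test_vars):
    """Sort the per-material variances of training and testing data and
    count how many materials keep the same rank (the ind1/ind2 check)."""
    ind1 = np.argsort(train_vars)  # material order by training variance
    ind2 = np.argsort(test_vars)   # material order by testing variance
    return ind1, ind2, int(np.sum(ind1 == ind2))

# Made-up variances for 10 materials; the last two swap rank between
# training and testing, which is the kind of mismatch described above
train_vars = np.array([0.2, 1.5, 0.9, 3.1, 0.4, 2.2, 0.7, 1.1, 5.0, 0.3])
test_vars  = np.array([0.25, 1.4, 1.0, 4.8, 0.5, 2.1, 0.6, 1.2, 3.0, 0.35])
ind1, ind2, n_match = variance_rank_match(train_vars, test_vars)
```

When a material's position in ind1 doesn't match its position in ind2, variance can't be trusted as a correction signal for that material.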
2. Width
   When using width to help recognize the material, the error increases to 5 in 10 (originally 2 in 10). I think this may be because widths are not unique among materials.


  So I think I'm going to stop at 2 errors in 10 materials, and start working on rectifying each picture's deformation, which is the main problem with the new dataset.

Monday, May 28, 2012

Clustering using K-means algorithm

  This weekend I tried to find a new way to cluster the materials, and I looked into VLFeat. Going through VLFeat's algorithms, I found that k-means might be a good way to cluster them, so I used the built-in kmeans function in Matlab to recognize the materials. Below is the result I got.
  As in the original method, I divided each picture (13 pictures, including 3 deformed ones) into training data and testing data, and tried to classify all 26 samples into 10 centers.
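Since I used Matlab's built-in kmeans, here is a rough Python/NumPy equivalent of the setup (26 samples, 10 centers; the feature vectors below are synthetic placeholders, not my real data):

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Minimal Lloyd's k-means, standing in for Matlab's built-in kmeans."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assign every sample to its nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned samples
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# 13 pictures, each split into a training and a testing sample -> 26 samples;
# the 3-D feature vectors here are synthetic placeholders
X = np.vstack([v + 0.01 * np.random.default_rng(v).random((2, 3))
               for v in range(13)])
labels, centers = kmeans(X, 10)
```

Comparing the label of each picture's training half with its testing half is exactly the train/test agreement check below.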
  This result shows the clustering results of the training and testing data:
This result shows the distance between every center and every point. (Each line represents a center.)


This result shows the differences between the clustering results of the training and testing data.

This time it gets confused between butter and denim. Again, I'm pretty sure this can be solved by adding variance as a factor. The failure to recognize the deformed pictures also reminds me that last time I only rectified the position of the peak for each picture as a whole; I still need to rectify all the peaks within one picture (split the picture into small columns and rectify every column), so that deformed pictures can be rectified into regular ones.
So those are my next two steps.
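The per-column rectification I have in mind could be sketched as follows (a toy NumPy version; real pictures would be split into column strips rather than single pixel columns):

```python
import numpy as np

def rectify_columns(img):
    """Shift every column so its intensity peak lands on the center row,
    straightening a deformed stripe column by column."""
    h = img.shape[0]
    out = np.zeros_like(img)
    for c in range(img.shape[1]):
        peak = int(img[:, c].argmax())
        out[:, c] = np.roll(img[:, c], h // 2 - peak)
    return out

# Tiny synthetic stripe whose peak drifts downward (a "deformed" picture)
img = np.zeros((7, 5))
for c in range(5):
    img[(2 + c) % 7, c] = 1.0
rect = rectify_columns(img)
```

After rectification every column's peak sits on the center row, so a deformed picture looks like a regular one to the classifier.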


Wednesday, May 23, 2012

New Dataset and Some Experiment

  Last week I was able to collect my new dataset with 10 kinds of materials (some already existed in the original dataset, some are brand new, like bread and butter). For every material, we took at least 2 pictures, at high and low exposure:

1. Paper 4 (includes 2 deformed ones)
2. Silk 2
3. Denim 6 (includes 2 deformed ones; because the pictures were taken from jeans, they recommended I take 2 more pictures in the orthogonal direction)
4. Wood 2
5. Bread 2
6. Butter 2
7. Plastic 2
8. Fibre 2
9. Metal 2
10. Skin 2


Below are some of the materials' pictures (low and high exposure for each):

1. Bread
2. Butter
3. Deformed denim
4. Denim
5. Paper
6. Deformed paper
7. Fibre
 I have also worked on my code to align the peak of each column to the center of the image. The result on the first dataset (2 errors in 10):

 I also ran my code on the new dataset (4 errors in 10):

 So I still need to figure out some ways to improve my code.

Monday, May 14, 2012

Complementing the Original Code

1. Revise the "confusing matrix" into a confusion matrix
   I studied the confusionmat function in Matlab and used it to represent the result better:
From the confusion matrix above we can see the wrong prediction for material 5 (based on the low-exposure pictures).
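The confusionmat result can be reproduced with a small NumPy stand-in (the true/predicted labels below are hypothetical, just to show one material being misclassified):

```python
import numpy as np

def confusion_matrix(truth, pred, n):
    """NumPy stand-in for Matlab's confusionmat:
    rows = true material, columns = predicted material."""
    cm = np.zeros((n, n), dtype=int)
    for t, p in zip(truth, pred):
        cm[t, p] += 1
    return cm

# Hypothetical predictions for 10 materials: material 5 (index 4)
# is wrongly predicted as material 9 (index 8), the rest are correct
truth = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
pred  = [0, 1, 2, 3, 8, 5, 6, 7, 8, 9]
cm = confusion_matrix(truth, pred, 10)
```

The off-diagonal entry shows exactly where the wrong prediction lands, which plain accuracy numbers hide.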


2. Multi-classification
  I used variance to recognize metal, and it works well to fix the wrong prediction above (based on the low-exposure pictures).
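The variance-based correction amounts to a post-hoc relabeling step; a sketch (the threshold and label indices are assumptions for illustration, not my actual values):

```python
import numpy as np

def refine_with_variance(pred, variances, metal_label, var_threshold):
    """Relabel a sample as metal when its intensity variance exceeds a
    threshold (metal's stripe response is much spikier than the rest).
    metal_label and var_threshold are illustrative assumptions."""
    pred = np.asarray(pred).copy()
    pred[np.asarray(variances) > var_threshold] = metal_label
    return pred

pred      = np.array([0, 1, 2, 3, 4])
variances = np.array([0.1, 0.2, 5.0, 0.3, 0.1])  # sample 2 is very spiky
fixed = refine_with_variance(pred, variances, metal_label=8, var_threshold=1.0)
```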




3. Try to use two exposures
  Last week, I thought I could subtract the low-exposure pictures from the high-exposure pictures, thus getting pictures without background. But now I find that the low-exposure pictures are good enough for analyzing this problem: they already contain hardly any background information. (As below)


 What's more important is that when the low-exposure pictures are subtracted from the high-exposure pictures, the results lose the important information, like the pictures below. So it's not a good way to suppress the ambient light.


But since the low-exposure pictures have much less information than the high-exposure pictures, I'm still trying to figure out a way to combine the two. (Taking the average of the two just makes the high-exposure picture darker.)

Any suggestions?
There is also a severe alignment problem. Many low-exposure pictures are not aligned, and I suspect the good results I got in parts 1 and 2 come from these location differences. My next step is to do alignment between pictures, or even within pictures (like the picture shown below). I know several groups in class are using alignment, so it's probably a good idea to ask those groups first.

Any suggestions?


To sum up:
  • Questions: 1. How to combine two exposures? 2. How to align pictures? (Is there an existing function?)
  • Next steps: 1. Combination and alignment. 2. Collect the next dataset.

Monday, May 7, 2012

A related paper

  A week ago, the professor gave me a related paper as an important reference; it was recently published at PROCAMS 2012. The link is http://www.cs.cmu.edu/~ILIM/publications/PDFs/MKSN-PROCAMS12.pdf. I have read it in detail and found that the paper is built on a device more sophisticated than ours, which can control red, green and blue lasers at high frame rates (18 kHz horizontally and 60 Hz vertically) and can thus use a filter to easily block the unwanted ambient light. Although they use a different device for different goals, the methods they use give me a lot of cues for conducting my experiment.
The structure of the paper is as follows:
  •  types
 "We discuss how the line-striping acts as a kind of “light-probe”, creating distinctive patterns of light scattered by different types of materials.
We investigate visual features that can be computed from these patterns and can reliably identify the dominant material characteristic of a scene, i.e. where most of the objects consist of either diffuse (wood), translucent (wax), reflective (metal) or transparent (glass) materials."
 
The types they are looking into:
  1. diffuse (wood): Lambertian materials
  2. translucent (wax): dispersive media and subsurface-scattering materials
  3. reflective (metal): reflective surfaces
  4. transparent (glass): refractive surfaces
  •  goals
  1. The first is low-power and low-cost reconstruction of diffuse scenes under strong ambient lighting (e.g. direct sun-light).
  2. The other application of our sensor relates to the scene’s material properties.

  •  method we can use for reference
  1. Ambient light suppression
  "Lastly the background can be suppressed by taking an image with the projector on and one with the projector off. It is not actually necessary to shut the projector off; instead, we choose a different trigger delay which effectively moves the location of the projected line. In this way, one gets two images with the same background but with different projected lines. Subtracting one from the other and keeping only the positive values gives us a single line-stripe."

 I can use the different exposures in different positions to subtract the background from the response.
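The paper's subtract-and-keep-positive trick is easy to prototype in NumPy (the frames below are synthetic; in my case the two exposures in different positions would play the role of the two trigger delays):

```python
import numpy as np

def subtract_background(img_a, img_b):
    """Ambient-light suppression as in the paper: subtract one frame
    from the other and keep only the positive values, which leaves a
    single line-stripe and cancels the shared background."""
    return np.clip(img_a.astype(float) - img_b.astype(float), 0, None)

# Two frames sharing the same ambient background, stripe on different rows
ambient = np.full((4, 4), 10.0)
frame_a = ambient.copy(); frame_a[1, :] += 100  # stripe on row 1
frame_b = ambient.copy(); frame_b[2, :] += 100  # stripe on row 2
stripe = subtract_background(frame_a, frame_b)
```

The shared ambient level cancels exactly, and clipping removes the other frame's (negative) stripe.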
   2. Fast per-frame analysis
"Fig. 6 shows a per-frame analysis of a scene with milky water bottles and another with glass objects. Our method has five steps: (1) For each column, find the maximum intensity pixel. (2) At this pixel, apply two filters (see figure inset). (3) If filter 1's response is greater than a threshold, it is glass. (4) Otherwise, if the response to the second filter is greater than a second threshold, label as milk. (5) If there is no labeling, then it is a diffuse material. In the figure, we have marked glass as red, milk as blue and diffuse as green. The biggest errors are for clear glass when the camera sees mostly the background. This is a fast classification, since for each column the filters are only applied once."

I can use a similar method to classify glass versus the other materials.
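Their five steps translate into a short per-column loop. In this sketch the two filters and the thresholds are placeholders I made up, not the paper's actual ones:

```python
import numpy as np

def classify_columns(frame, filt1, filt2, t1, t2):
    """Sketch of the paper's per-frame analysis: for each column find
    the peak pixel, apply two small filters around it, and threshold
    the responses (glass, else milk, else diffuse)."""
    labels = []
    half = len(filt1) // 2
    padded = np.pad(frame, ((half, half), (0, 0)))  # zero-pad rows
    for c in range(frame.shape[1]):
        r = int(frame[:, c].argmax()) + half       # (1) peak pixel
        window = padded[r - half:r + half + 1, c]
        resp1 = float(window @ filt1)              # (2) filter responses
        resp2 = float(window @ filt2)
        if resp1 > t1:                             # (3) glass
            labels.append('glass')
        elif resp2 > t2:                           # (4) milk
            labels.append('milk')
        else:                                      # (5) diffuse
            labels.append('diffuse')
    return labels

# Three synthetic columns: a sharp spike, a broad bump, a weak bump
frame = np.array([[0, 0, 0],
                  [0, 5, 1],
                  [10, 6, 2],
                  [0, 5, 1],
                  [0, 0, 0]], dtype=float)
labels = classify_columns(frame,
                          np.array([-1.0, 2.0, -1.0]),  # sharpness filter
                          np.ones(3) / 3,               # broadness filter
                          t1=15, t2=3)
```

Whether these particular filters separate my materials is an open question; the point is that the per-column structure is cheap to run.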


 3. Full scan analysis
  • diffuse (plastic)
"First we point out that simple detection of a single, sharp peak in a scene point's appearance profile [13] strongly suggests lambertian/diffuse reflectance. If the profile has no peak, then the projector is not illuminating this pixel and therefore it is in shadow."


They identify diffuse materials by looking at the number of intensity maxima, which may not be useful in our experiment.
  • scattering and subsurface scattering(wax)
"Figure 8. We take the power spectrum of the three dimensional Fourier transform of each scan video, and integrate the time frequency dimension. The resulting 2D matrix is mostly sparse. Low non-zero support gives an indication of scattering and subsurface scattering."

  There may not be spectra in our dataset, but I may try variance to detect this feature.

  •  distinguishing between reflective or refractive surfaces(metal and glass)
  "We have empirically found that the number of intensity maxima in the appearance profile at each pixel can be very discriminative. An intuitive explanation is that since reflective caustics are caused by opaque objects, the number of observed caustics at each scene point is less than in a refractive material, where the camera can view through the material onto the diffuse background, allowing the observance of many more caustics."


Glass shows more caustics than metal, so I can use this feature too.
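Counting the intensity maxima in a pixel's appearance profile is easy to prototype (the two profiles below are invented, just to show the expected metal-vs-glass gap):

```python
import numpy as np

def count_intensity_maxima(profile, min_height=0.1):
    """Count local maxima in an appearance profile; per the paper,
    refractive materials (glass) show more caustic peaks than
    reflective ones (metal)."""
    p = np.asarray(profile, dtype=float)
    interior = p[1:-1]
    # a peak is strictly above both neighbors and above a noise floor
    is_peak = (interior > p[:-2]) & (interior > p[2:]) & (interior > min_height)
    return int(is_peak.sum())

metal_profile = [0, 0.2, 1.0, 0.3, 0, 0, 0, 0]           # one caustic peak
glass_profile = [0, 0.8, 0.1, 0.6, 0, 0.7, 0.2, 0.5, 0]  # several caustics
```

A simple threshold on this count could then separate metal from glass.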
 "In (c) we show the raw features obtained from a low-res histogram of gradients (HOG). The top three discriminative features (d) for metal and glass show promise, but we believe more data is needed before a discriminative hyperplane can be learned."
This is another way to discriminate metal and glass, but it may not be usable in my experiment.
  • methods my experiment can use
  1. We need to do ambient light suppression.
  2. We can use filters to detect the main differences.
  3. The number of intensity maxima can also be an important feature.