Tuesday, February 26, 2013

Search one image

How do you search for an image inside another image?


Well, an interesting capability of computer vision is the possibility of searching for one image inside another. This process can be used in many systems to find or count objects (although the images must be very similar and the environment must be under strict control).

First I took the image to search for; in this case it is the insignia of Doc Mac Coy, just for this exercise.






turbo

Standard


Using gray images directly


In this case loading takes a little longer with libjpeg-turbo than with the standard library, and the matching times are similar.

Here is the code (there are more examples in opencv/doc and opencv/samples).

To use the function matchTemplate(imgSrc, imgPattern, ...), both images must be in grayscale, so we can load them directly in gray and skip the conversion step, saving about 10 ms.

This function finds the area where the pattern is located in the source image.

bool fastMatch (const Mat& _source, const Mat& _pattern, Rect* rectROI, double coincidence)
{
    Mat source;
    Mat pattern;

    Size sourceSize;
    Size patternSize;
    Size imgResultSize;

    Point maxLoc, pointRectROI;
    double maxVal;

    bool found = false;
// we can avoid this step if we load the images directly in gray
    cvtColor(_source,source,CV_BGR2GRAY);
    cvtColor(_pattern,pattern,CV_BGR2GRAY);

//We need to take the size of the images.
    sourceSize = source.size();
    patternSize = pattern.size();

    imgResultSize.width = sourceSize.width - patternSize.width + 1;
    imgResultSize.height = sourceSize.height - patternSize.height + 1;
    Mat imgResult(imgResultSize,CV_32FC1);
//Function that finds the pattern in the source image
    matchTemplate (source, pattern, imgResult,CV_TM_CCOEFF_NORMED);
    minMaxLoc(imgResult,NULL,&maxVal,NULL,&maxLoc);
    maxVal *=100;
    if (maxVal >= coincidence)
    {
        *rectROI = Rect(maxLoc.x, maxLoc.y, patternSize.width, patternSize.height);
        found=true;
    }
    return found;
}

Monday, February 25, 2013

libjpeg vs libjpeg-turbo

With the first exercise I have the opportunity to compare libjpeg vs libjpeg-turbo.

Before installing libjpeg-turbo I made a backup with the standard one, so it is a great moment to compare both.

The code used is the same in both cases (Contours), and here are the results.

libjpeg-turbo

libjpeg

            libjpeg   libjpeg-turbo
Load          199          63
2Gray          61          45
Threshold      12          12
Create         32          11
Find           53          31
Draw           47          47
Total         404         209

(all values in milliseconds)

In the end libjpeg-turbo is faster than libjpeg; in the future we will check what happens with Linaro vs Raspbian.

For the moment I'll be working with Raspbian and libjpeg-turbo.

Saturday, February 23, 2013

Contours in openCV

Detecting contours.


Detecting contours is one of the simplest processes.

Load the image -> convert to gray -> apply the threshold function -> find the contours -> draw the contours

The code

//Load the image in colour
        flag=(double) getTickCount();
        imagen=imread(argv[1],CV_LOAD_IMAGE_UNCHANGED);
        flagLoad=getTick(flag);
//Change the image to gray
        flag=(double)getTickCount();
        cvtColor(imagen,imgGris,CV_BGR2GRAY);
        flag2Gray=getTick(flag);
//use the threshold to separate the objects
        flag=(double)getTickCount();
        threshold(imgGris,imgContorno,122,255,THRESH_BINARY);
        flagThreshold=getTick(flag);
//we need a place to leave the new image
        flag=(double)getTickCount();
        Mat dst = Mat::zeros(imgGris.rows,imgGris.cols,CV_8UC3);
        flagCreate=getTick(flag);

        namedWindow("gris",CV_WINDOW_NORMAL);
        namedWindow("contorno",CV_WINDOW_NORMAL);
        namedWindow("binario",CV_WINDOW_NORMAL);

        imshow ("gris",imgGris);
        imshow ("contorno",imgContorno);
        cvMoveWindow("gris",300,50);
        cvMoveWindow("contorno",600,50);

vector< vector<Point> > vecContornos;
vector<Vec4i>jerarquia;
//Find the contours
        flag = (double)getTickCount();
        findContours(imgContorno,vecContornos,jerarquia,CV_RETR_CCOMP,CV_CHAIN_APPROX_SIMPLE);
        flagFind =getTick(flag);
        flag = (double)getTickCount();
        for (int idx=0; idx>=0; idx=jerarquia[idx][0])
        {
                drawContours(dst,vecContornos,idx,WHITE,5,8,jerarquia);
        }
        flagDraw = getTick(flag);

The results 

And finally we have these results from the original image


Original image


Result image
All the values are in milliseconds

This exercise gives us some useful information.

The main figures are the threshold to separate the image (12 ms) and finding the contours (31 ms): around 50 ms to detect objects, which is not bad at all for a device that could use this information.

But the structural analysis could take more time; we will see what happens in the next exercises.

Any comment will be appreciated

Friday, February 22, 2013

Raspbian + OpenCV + libjpeg-turbo

A while ago I installed OpenCV on Raspbian, but after speaking with the Cubieboard community I learned there is a library called "libjpeg-turbo" which is faster than the standard one.

I read a lot of information about it, and as far as I know, the Linaro distribution uses it (I have to double-check).

But at this moment I can't drop Raspbian, so I have to install the library separately.

I found one article that explains what to do:
How to compile OpenCV 2.4.0 with libjpeg-turbo
To build OpenCV 2.4.0 with libjpeg-turbo you need:
  1. build libjpeg-turbo as a static library
  2. configure OpenCV with the following command:
    cmake -DWITH_JPEG=ON -DBUILD_JPEG=OFF -DJPEG_INCLUDE_DIR=/path/to/libjpeg-turbo/include/ -DJPEG_LIBRARY=/path/to/libjpeg-turbo/lib/libjpeg.a /path/to/OpenCV

But I'm a little out of practice and ran into some questions.
So what do I have to do? I needed a cookbook, and since I didn't find one I had to write my own. (If you spot any mistake, please let me know.)

1º Download the libjpeg-turbo

Master of libjpeg-turbo :https://github.com/aumuell/libjpeg-turbo/archive/master.zip

To perform the installation, follow these steps.
2º Prepare the installation

unzip libjpeg-turbo-master.zip
cd {source_directory}
autoreconf -fiv
(note: if autoreconf doesn't exist, install it with "sudo apt-get install dh-autoreconf")
mkdir {build_directory}
# cd {build_directory}; sh {source_directory}/configure [additional configure flags]
../configure --enable-static
2-Bº After the configuration we need to make the library position independent. (How do I make a static library work here?) The answer is "-fPIC", an abbreviation for Position Independent Code, which has to be passed when creating the library code objects; without that flag, code specific to this build would be used and the library would fail.

The command "../configure --enable-static" will create some files.
We have to edit the "Makefile":
Locate the line CC = gcc
and change it to CC = gcc -fPIC
sudo make
sudo make install
After all these steps we'll have libjpeg-turbo installed.

3º Now, how do we link our OpenCV to libjpeg-turbo? It's easy: we have to generate our CMake configuration again.
#I removed the Python compatibility
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_EXAMPLES=ON \
      -D WITH_JPEG=ON -D BUILD_JPEG=OFF \
      -D JPEG_INCLUDE_DIR=/path/to/libjpeg-turbo/include/ \
      -D JPEG_LIBRARY=/path/to/libjpeg-turbo/lib/libjpeg.a /path/to/OpenCV
 

When the CMake command is done we will see something like this:

-- Detected version of GNU GCC: 46 (406)
-- Found JPEG: /opt/libjpeg-turbo/lib/libjpeg.a  
-- Found Jasper: /usr/lib/arm-linux-gnueabihf/libjasper.so (found version "1.900.1") 
-- Found OpenEXR: /usr/lib/libIlmImf.so
-- Looking for linux/videodev.h
-- Looking for linux/videodev.h - not found
-- Looking for linux/videodev2.h
-- Looking for linux/videodev2.h - found
-- Looking for sys/videoio.h
-- Looking for sys/videoio.h - not found
-- Looking for libavformat/avformat.h
-- Looking for libavformat/avformat.h - found
-- Looking for ffmpeg/avformat.h
----------------
-- 
--   Media I/O: 
--     ZLib:                        /usr/lib/arm-linux-gnueabihf/libz.so (ver 1.2.7)
--     JPEG:                        /opt/libjpeg-turbo/lib/libjpeg.a (ver 80)
--     PNG:                         /usr/lib/arm-linux-gnueabihf/libpng.so (ver 1.2.49)
--     TIFF:                        /usr/lib/arm-linux-gnueabihf/libtiff.so (ver 42 - 4.0.2)
--     JPEG 2000:                   /usr/lib/arm-linux-gnueabihf/libjasper.so (ver 1.900.1)
--     OpenEXR:                     /usr/lib/libImath.so /usr/lib/libIlmImf.so /usr/lib/libIex.so /usr/lib/libHalf.so /usr/lib/libIlmThread.so (ver 1.6.1)


make
#we have to wait a little bit (you can stop and continue later)

sudo make install 
 

Monday, February 18, 2013

Repeat the exercises

Next exercises 

A while ago I did some exercises to study computer vision, and now I have to repeat them.
I did them on my computer (an i3 with 6 GB of RAM), and I have to admit that the code was a little dirty (not very efficient).

But I can show the expected results, and that will give a better idea of what I want to do with the Cubieboard.

.-Detect and follow

This exercise consists of taking a part of an image, the eye in this case, and detecting and following it.


.- Contours.

One of the most important things is to detect the different contours and the center of each object.


.- Blobs
Blobs are groups of contiguous pixels similar enough to belong to the same piece








Saturday, February 16, 2013

I need a backup

How to do a backup?

Well, we have our micro SD working fine, with all the libraries that we need,

such as the OpenCV library. But I want to try to modify the build I made in Cubieboard + OpenCV, and there is a small risk of making a mistake that would force the installation process to start all over again.

To avoid this problem, I am going to back up the micro SD; it is quite simple in Linux. I don't know how to do it in Windows, but if someone can, please leave a comment.

.- Take out the micro SD and connect it to the computer with a USB adapter

ikaro@nirvana ~ $ dmesg
[ 2298.045043] sd 3:0:0:0: [sdc] 15548416 512-byte logical blocks: (7.96 GB/7.41 GiB)
[ 2298.050815]  sdc: sdc1 sdc2
[ 2298.053659] sd 3:0:0:0: [sdc] Attached SCSI removable disk

The card has two partitions

ikaro@nirvana ~ $sudo fdisk /dev/sdc

Disk /dev/sdc: 7960 MB, 7960788992 bytes
245 heads, 62 sectors/track, 1023 cylinders, total 15548416 sectors
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048      131071       64512    e  W95 FAT16 (LBA)
/dev/sdc2          131072    15548415     7708672   83  Linux

So we want to duplicate the micro SD.

With all this information we can continue to clone / duplicate / back up

ikaro@nirvana ~ $ dd bs=1M if=/dev/sdc of=raspbery_Cubie.img

I tried with bs=4M but the backup did not work.
This command takes a few minutes.

And finally we will have our backup of the micro SD

ikaro@nirvana ~/backup_cubieboard $ ls -lh
total 7,5G
-rw-r--r-- 1 ikaro ikaro 7,5G 2013-02-16 10:40 raspbery_Cubie.img

Now we have to write this image onto a new micro SD, at least of the same size

ikaro@nirvana ~ $ dd bs=1M  if=raspbery_Cubie.img of=/dev/sdc

And we will have the backup done.

I know it is a simple process, but it is very useful

And if you want to back up the NAND flash, it is quite easy too, from Raspbian.

 dd if=/dev/nand of=/some/place/with/enough/space
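A variation worth knowing (my addition, not from the original post; the file names are examples): piping dd through gzip produces a compressed image, and cmp verifies a restore. The sketch below runs the same pipeline on a plain file so it can be tried without touching a real card; for the real thing, replace demo.img with /dev/sdc.

```shell
# Demo on a regular file; use your card device (e.g. /dev/sdc) in the real case.
dd if=/dev/urandom of=demo.img bs=1M count=2 2>/dev/null

# Backup: dd piped through gzip gives a compressed image.
dd bs=1M if=demo.img 2>/dev/null | gzip > demo.img.gz

# Restore and verify that both copies are identical.
gunzip -c demo.img.gz | dd bs=1M of=restored.img 2>/dev/null
cmp demo.img restored.img && echo "backup verified"
```

For an 8 GB card that is mostly empty, the compressed image is usually much smaller than the raw 7.5 GB file.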

Friday, February 15, 2013

Spanish to English


Why am I changing to English?


Well, in the near future it seems I will be out of work, and I know that the future is outside of Spain, whether with clients abroad or working in a company myself.

So I know that I have to practice as much English as I can, and the best method is linguistic immersion, so I will try to do everything I can in English.

Normally in technology, English is the vehicle of communication, study, and development.

My English will be quite simple and plain, since it will be based mostly on tests and results, and in most cases the names come from English anyway.

In any case, I will keep answering in Spanish whoever asks me in Spanish.
But please forgive my mistakes.

Long live the "ñ"

Sunday, February 10, 2013

Load an Image


First step with OpenCV (corrected)

Load an image, Lena is here.

I did this exercise a few weeks ago, but I made a mistake and the data obtained were not correct: the first test gave me around 140 milliseconds to load and display an image.

Too much time, if we consider that one second of video can contain up to 30 images or even more: one image every 33 milliseconds.

Note: this comparison is not entirely fair; video uses different kinds of compression, such as I-frames (real images) and P-frames (predictive frames).


There are more things to do, such as trying libjpeg-turbo or changing Raspbian for Linaro.

But the first thing to do is to correct the code and time each process separately: loading the image and displaying the image.

This is the new code.

#include <cv.h>
#include <highgui.h>
#include "../00_include/tools.h"

using namespace cv;

int main (int argc, char** argv)
{
Mat imagen;
double flag,flagCarga,flagDisplay, tiempo;
char resultadoCarga[25];
char resultadoDisplay[25];
char resultado[25];
Size imgSize;
//flag of time
        flag =(double)getTickCount();
        imagen=imread(argv[1],CV_LOAD_IMAGE_UNCHANGED);
//calculate the time
        sprintf(resultadoCarga,"Load %2.f",getTick(flag));
        namedWindow("FOTO",CV_WINDOW_AUTOSIZE);
//Fetch the size
        imgSize = imagen.size();
        sprintf(resultado,"Size width=%d height=%d",imgSize.width,imgSize.height);
//New flag of time
        flag = (double)getTickCount();
        imshow("FOTO",imagen);
//calculate the time
        sprintf(resultadoDisplay,"Display %2.f",getTick(flag));
        printf("%s\n",resultadoDisplay);
//put the data on the image
        putText(imagen,resultadoCarga, Point(10,20),FONT_HERSHEY_SIMPLEX,0.5,BLUE,1);
        putText(imagen,resultadoDisplay,Point(10,35),FONT_HERSHEY_SIMPLEX,0.5,BLUE,1);
        putText(imagen,resultado,Point(10,50),FONT_HERSHEY_SIMPLEX,0.5,BLUE,1);
//Save the image
        imwrite("out.jpg",imagen);
        waitKey();
        return 0;
}


Well, with the corrected software we get these values on the photo.


The values are:
.- Load: 41 milliseconds
.- Display: 5 milliseconds

Well, these values are not too bad, but it is not computer vision yet; we did not do anything with the image, such as checking blobs, detecting a face, an eye, some color, detecting objects, etc.

Learning without thinking is labor lost. Confucius (551 BC - 478 BC)

We can see that it takes 41 milliseconds to load an image, so we should check whether libjpeg-turbo reduces the load time, and also try using a SATA HD.
The display time was 5 milliseconds; in computer vision we do not need to see the image, the Cubieboard just has to "see" it and process it.

This exercise does not give us much information, since it is not really computer vision.
I will have better information when I have time to do the computer vision exercises: detecting contours, geometry, faces, eyes, blobs (contiguous pixels with similar color), etc.

Another important thing is to try Linaro, but I cannot do it at this moment; I have to wait a couple of weeks.


Here is some very interesting information that I learned from the Cubieboard community


getTick(flag)

double getTick(double flag)
{
/*
This function returns the time in milliseconds
since the "flag" moment until now
*/
//Get the frequency in ticks per millisecond
    double frecuencia = getTickFrequency() / 1000;
    double t = (double)getTickCount();
    return (t - flag) / frecuencia;
}


 



Wednesday, February 6, 2013

Next step: computer vision.

Computer vision

Well, having reached this point, let's get down to work and give the Cubieboard a bit more life.

Webcam

An expensive camera with many megapixels, or a cheap one with few megapixels?

We want our "system" to be able to do some kind of recognition and work with images; so far so good.

But how does an image work in a computer?
An image is nothing more than a matrix of X*Y pixels (megapixels).
Higher resolution means more information, but much of it is redundant at the vision level.
The weakest point of the Cubieboard is processing speed, so we will try to use as few pixels as possible; the image may lose quality for us, but the Cubieboard will lose no effectiveness and will gain performance.

This is my webcam

pi@raspberrypi ~ $ lsusb
Bus 003 Device 002: ID 046d:0819 Logitech, Inc. Webcam  C210

What are we going to do in computer vision?

This is not meant to be a computer vision course, but a series of practical exercises where you can see the examples and how the Cubie behaves. Although if there are doubts, together we can try to resolve them.

We want real-time computer vision, so we are going to work with video images, not static images.

This is the plan; it may change, and if anyone has a question, comment, or contribution, feel free to make it
  1. Image acquisition
    1. Photo
    2. Video
  2. Contour detection
  3. Geometry
  4. Region of interest (ROI)
  5. Image detection
  6. Augmented Reality
  7. Blobs
The plan will take some time to complete; I will try to do it over the next two months (if life's complications allow it)

Tuesday, February 5, 2013

I am a nomad

I am a nomad

How can I work with the Cubieboard?

I have no fixed home; I am a nomad and live off friendship, the wind, and the Wi-Fi my friends lend me.

I only have room for a laptop; I have no keyboard or mouse for the Cubie, so how do I work now?

Easy: I connect over SSH.
Before being left without a monitor and keyboard, I configured a fixed IP on the Cubie's Ethernet port.
And with a crossover cable we can connect.
The Cubieboard will reach the Internet over Wi-Fi, but for now I don't need it.

ikaro@nirvana ~ $ ssh -X pi@192.168.0.50
pi@192.168.0.50's password:
Linux raspberrypi 3.4.19-a10-aufs+ #4 PREEMPT Fri Dec 28 23:40:52 CET 2012 armv7l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Thu Jan 24 19:25:17 2013 from 192.168.0.20
pi@raspberrypi ~ $

ssh -X user@ip lets you run graphical applications remotely.

And ta-da, problem solved; we are active again.

I can already tell you that I tried the webcam and attempted face detection :)
and IT WORKS, but it takes 3 seconds to detect a face. More on that in the next post.