Monday, June 17, 2013

Still more about serial connection

Serial connection at the cubieboard side

As we saw in the previous post (More about serial connection), I've done the serial part on the arduino's side, but what happens on the cubieboard side?

On the cubieboard side we have to write a quite similar program, which should open the serial connection and take care of sending and receiving the data.

This is the simplest version that I made; for sure it has mistakes and it could be made better, but it is just for demo purposes.

There is a lot of documentation on the internet better than mine.
The pseudocode should be something like this.

Begin
 open port (/dev/ttyXXX)
 configure port for arduino
 fork()
 if (child) read until exit
 if (parent) write until exit
End

With this in place I still have to improve my own protocol.
I don't like to paste all the code, but here it is (without any warranty :))
 


///BEGIN COMUNICADOR.cpp

#include <sys/stat.h>
#include <fcntl.h>
#include <strings.h>
#include <string.h>
#include <termios.h>
#include <unistd.h>
#include <iostream>
#include <stdio.h>


#define MAXBUFFER 255

using namespace std;

int configuraPuerto(struct termios *newtio);//configure the port to communicate with arduinos

int main (int argc, char **argv)
{
    int fd; //file descriptor for the serial port
    struct termios oldtio, newtio;
    int pid; //to fork the process
    bool STOP=false;

    char buffer[MAXBUFFER]; //maximum size for a serial buffer

    if (argc !=2) //check that we receive the needed parameters
    {
        cout << "Usage: " << argv[0] << " /dev/ttyXXX" << endl;
        return -1;
    }

    //Open the port
    fd = open(argv[1], O_RDWR | O_NOCTTY | O_NDELAY);
    if (fd <0)
    {
        perror(argv[1]);
        return -2;
    }
    fcntl(fd, F_SETFL, 0); //back to blocking reads, so VTIME works
    cout << "Opening serial port " << argv[1] << endl;
    tcgetattr(fd,&oldtio); //save the old settings to restore them later
    bzero(&newtio,sizeof(newtio));
    configuraPuerto(&newtio);
    tcflush(fd,TCIFLUSH);
    if (tcsetattr(fd,TCSANOW,&newtio) <0)
    {
        perror ("Could not set the port attributes\n");
        return -3;
    }
    cout << "Serial port configured for arduino" << endl;

    pid = fork();
    if (pid < 0)
    {
        perror("Error in fork()");
        return -4;
    }
    cout << "Fork created" << endl;
    if (pid ==0) //child process
    {
        int res=0; //bytes returned by each read
        cout << "Child process (read)" << endl;
        //Read loop
        while (!STOP)
        {
            res = read (fd, buffer, MAXBUFFER-1);
            if (res >0)
            {
                buffer[res]=0; //terminate the received string
                cout << buffer << endl;
                if (buffer[res -1]==';') STOP = true; //';' marks the end of a command
            }
        }
        //leaving the child
        cout << "Leaving child process (read)" << endl;
    }
    if (pid >0) //parent process
    {
        cout << "Parent process (write)" << endl;
        while(!STOP) //STOP is never set here; stop the demo with Ctrl+C
        {
            strcpy(buffer,"HOLA;");
            write(fd,buffer,strlen(buffer));
            usleep(5000000); //wait 5 seconds between messages
        }
        //end of the write loop
        //restore the old port settings
        tcsetattr(fd,TCSANOW,&oldtio);
        close(fd);
    }
    return 0;
}


int configuraPuerto(struct termios *newtio)
{
    //Fixed parameters for the communication with the Arduino
    //9600 bps, 8N1, no flow control
    cfsetispeed(newtio,B9600);
    cfsetospeed(newtio,B9600);
    //8N1
    newtio->c_cflag &= ~PARENB;
    newtio->c_cflag &= ~CSTOPB;
    newtio->c_cflag &= ~CSIZE;
    newtio->c_cflag |= CS8;
    //no hardware flow control
    newtio->c_cflag &= ~CRTSCTS;
    //ignore the modem control lines and enable the receiver
    newtio->c_cflag |= CLOCAL | CREAD;

    //no software flow control
    newtio->c_iflag &= ~(IXON | IXOFF | IXANY);
    //raw mode
    newtio->c_lflag &= ~(ICANON | ECHO | ECHOE | ISIG);
    //read timeouts: return as soon as data arrives, or after 2 seconds
    newtio->c_cc[VMIN]=0;
    newtio->c_cc[VTIME]=20;
    return 1;
}

///END COMUNICADOR.cpp
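
To try it, this is roughly how I compile and run it (assuming the file is saved as COMUNICADOR.cpp and the arduino is on /dev/ttyS0):

g++ COMUNICADOR.cpp -o comunicador
./comunicador /dev/ttyS0

The child process prints everything that arrives from the arduino, and the parent sends "HOLA;" every five seconds.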

More about serial connection

The importance of communication

Our cubieboard has enough connections to control almost all of our sensors, using I2C.
But in my case I'll have a few more analogue sensors, and I want to avoid that; the cubieboard shouldn't have to deal with the simplest things.

For this, my cubieboard will be the brain, the arduino will be the senses, and the connection will be made over a serial connection.
I could do it using I2C, but the arduino is already working as the I2C master for the sensors, and it could get more complicated (for the moment :) ).

The arduino part.(Serial vs SoftwareSerial)

Arduino has a serial port built in; you can access it through the USB or through the digital pins (Rx-0, Tx-1).
This option gave me two problems.

1º The USB-serial driver (ftdi_sio) does not exist in my version of raspbian (Linux raspberrypi 3.4.24-a10-aufs+), and it's necessary to upgrade (I did it, but then it's also necessary to install OpenCV, libjpeg-turbo, etc. again). This option is completely necessary if you don't want to work with arduino sketches (too slow :( on the cubieboard).


2º The other issue is more a design problem. I'm learning how to make drones, and if I want to debug the arduino part without worrying about the cubieboard, I should do it through the USB serial; but if it's used by the cubieboard, I couldn't have access to it.

Arduino gives us a solution, and it is to use a software serial.
It's possible to use two different digital pins for the serial communication.
Here is a useful example (http://www.arduino.cc/en/Reference/SoftwareSerialExample)

Here is the connection schema (quite simple).
The serial connection sends each byte alone; it is character oriented, so we have to handle that in our protocol.
Here is the code, a modification of the arduino example.
///BEGIN .ino
#include <SoftwareSerial.h>

//Define for the new pins for the Software-serial
#define rxPin 2
#define txPin 3


String buffer="";// In this buffer I'll receive the commands

// Create a new serial port
SoftwareSerial miSerie =  SoftwareSerial(rxPin, txPin);
void setup() 
{
  Serial.begin(9600);
  Serial.println("USB SERIAL UP");
 
  // Set up the software serial port
  pinMode(rxPin,INPUT);
  pinMode(txPin,OUTPUT);
  miSerie.begin(9600);
  miSerie.println("SOFT SERIAL UP");
}

void loop()
{
//isBufferComplete is used to retrieve complete commands
  if (isBufferComplete())//this function reads the data from my serial (softserial)
  {
    Serial.print("Recibido:");
    Serial.println(buffer);
    buffer="";
  }
  //Forward from the USB serial to the cubieboard (through SoftSerial)
  if (Serial.available())//Just read from the USB serial
    miSerie.write(Serial.read());
}

//This function reads from the softserial
boolean isBufferComplete()
{
  //In this function we receive each character until
  // ';' arrives, which means end of command
  boolean completa =false;
  while (miSerie.available())
  {
    char inChar = (char)miSerie.read();
    buffer += inChar;
    if (inChar ==';') completa=true;
  }
  return completa;
}
///END .ino
note: SerialEvent on the arduino doesn't work with software serial
http://arduino.cc/en/Tutorial/SerialEvent

The next post will have the code for the cubieboard.


Sunday, April 7, 2013

And now what ???

After some months of study, the blog needs to improve.

So I'm migrating to wordpress, and my first project will start on it.

http://doingDrones.wordpress.com

In this new blog I'm going to improve and learn all the knowledge needed to build a drone.

I'll try to do all the steps from zero.

For the moment it's the simplest design, a terrestrial vehicle with wheels (it could have legs, or jump, or crawl, so it's not so trivial).

The idea of this vehicle is to be a test base to improve on, so it will be as simple as possible.

I'll try to use most of the information I've found on the internet about similar things and modify it to fit my project: engines, batteries, gears, etc.


But to start, I'm going to design my own vehicle

 (arduino + cubieboard ) ^imagination = everything you can do




I hope to see you on my new blog; I'm doing this for the community, to share and improve our knowledge.

Thanks for everything, and I'll see you at http://doingDrones.wordPress.com
I'm also thinking about this other board, the Udoo
http://www.udoo.org/


Tuesday, April 2, 2013

Drive a DC motor (send data to the arduino)

Exercise 3º Drive a DC motor

The simplest version, and not for real use.

Till now I was communicating from the arduino to the cubieboard, which is quite important, but now I have to send information from the cubieboard to the arduino.

For example, I send the amount of milliseconds that the dc-motor must be running.

The circuit is one of the simplest.
You can find all the information here (very good tutorials)
http://www.jeremyblum.com/2011/01/31/arduino-tutorial-5-motors-and-transistors/

I took this schema

http://www.jeremyblum.com/2011/01/31/arduino-tutorial-5-motors-and-transistors/

and here is the circuit; I had to change some values for the resistor, transistor, etc. because I had to reuse some old components

For the moment this circuit is just used to show how we can send information from the cubieboard to the arduino; in the real world I'll make these circuits with half-H drivers (L293 or similar), like this

And here is the code on the arduino. To send the data from the cubieboard I used the command
cu -s 115200 -l /dev/ttyS0

Obviously it can be done better, but it is just to show the idea.
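
Since the arduino code only appears in the picture, here is a minimal sketch of the idea (the pin driving the transistor and the 115200 bps are my assumptions, taken from the cu command above):

//Hypothetical minimal sketch: run the DC motor for the number of
//milliseconds received over the serial port
const int motorPin = 9; //assumed pin driving the transistor

void setup()
{
  Serial.begin(115200); //same speed used with cu on the cubieboard
  pinMode(motorPin, OUTPUT);
}

void loop()
{
  if (Serial.available())
  {
    long ms = Serial.parseInt(); //read the amount of milliseconds
    if (ms > 0)
    {
      digitalWrite(motorPin, HIGH);
      delay(ms);
      digitalWrite(motorPin, LOW);
    }
  }
}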

Wednesday, March 27, 2013

Cubieboard + arduino (thermometer I²C DS1621)

Exercise 2
Temperature sensor.

Now let me introduce the integrated circuit DS1621, which is a thermometer with a communication system based on I²C (Inter-Integrated Circuit).

Our cubieboard also has an I²C connection, but I'm still working on a system to send the information from the senses (arduino) to the brain (cubieboard).

This is the video about the construction of the whole circuit.
I began by building the circuit to work with the arduino, and checked it.
I send the information to the serial port and show it on the screen.

After that I connected the serial cable between the arduino and the cubieboard (the cable is an old CD-audio cable from my old computer).

Here is the video




and here is the result at the cubieboard


There is a lot of information about how to program the arduinos and I²C


I've changed the "Protocol" library, because the pin is not really necessary: in the case of I2C it's possible to add several integrated circuits on the same port, and they will all use the same communication channel


a little bit of code
[CODE TO READ THE DS1621]

#include <Wire.h>
#include <protocol.h>

#define DEV_ID 0x90 >> 1 //the Wire library uses the 7-bit address (0x90 is the 8-bit write address)


Protocol protocolTemperature("Temperature");

void setup()
{
  Serial.begin(9600);//I'm using 9600 bps to communicate the arduino and the cubieboard
  Wire.begin();
  Wire.beginTransmission(DEV_ID); //connect to DS1621
  Wire.write(0xAC); //Access config
  Wire.write(0x02); //continuous conversion
  Wire.endTransmission(); //without this the config is never actually sent
  Wire.beginTransmission(DEV_ID); //talk to the DS1621 again
  Wire.write(0xEE); // start conversion
  Wire.endTransmission();
//read the documentation, it's a very interesting temperature sensor
}

void loop()
{
 protocolTemperature.send(readTemperature());
}
// This will go to another library
//I love the libraries :)
float readTemperature()
{
  int8_t firstByte;
  int8_t secondByte;
  float temp = 0;
  delay(1000); // give time for the measurement
  Wire.beginTransmission(DEV_ID);
  Wire.write(0xAA); // read temperature command
  Wire.endTransmission();
  Wire.requestFrom(DEV_ID, 2); // request two bytes from DS1621 (0.5 deg. resolution)

  firstByte = Wire.read();  // get first byte (whole degrees)
  secondByte = Wire.read(); // get second byte (non-zero if there is an extra 0.5 deg)

  temp = firstByte;

  if (secondByte) // if there is a 0.5 deg difference
    temp += 0.5;
  return (temp);
}


[PROTOCOL.cpp]

void Protocol::send(float value)
{
    char buffer[16];
//NOTE: HOW TO CONVERT A FLOAT TO STRING IN ARDUINO
    dtostrf(value,5,2,buffer);
    Serial.println(_id+":"+buffer);
}

[PROTOCOL.h]
//This class will be growing
class Protocol
{
    public:
        Protocol (String id);
        String getId();
        void send(String msg);
        void send(float value);

    private:
        String _id;
};
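
The constructor and getId() are declared above but their bodies are not listed in the post; a minimal guess at what they look like in PROTOCOL.cpp, based only on the header, would be:

Protocol::Protocol(String id)
{
    _id = id;
}

String Protocol::getId()
{
    return _id;
}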

The circuit to use I²C is quite simple









Sunday, March 24, 2013

Cubieboard + arduino + protocol

Light Sensor

This is another step in the communication between an arduino and a cubieboard.
For the moment I'm only using ttyS0 on the cubieboard, but for more complicated developments it may be necessary to use the GPIO.

The exercise consists of detecting a value and sending it to the cubieboard; sounds simple, doesn't it?

Well, the circuit is quite simple: it's an LDR (light dependent resistor) connected to an analog input of the arduino.

With its value we change the blink delay, and with less light the led blinks faster.


This is the basic circuit, which is a modification of project 14 (Light sensor) from http://math.hws.edu/vaughn/cpsc/226/docs/askmanual.pdf



Light sensor + serial connection
Well, this circuit is just to see how to send the information from the arduino to our cubieboard, and how to interpret the data.

Now it's possible to receive the data and manipulate it.

There is a little bit of programming on the cubieboard to open the serial connection (ttyS0); I'll try to summarize it, as it could be a bit tedious.

//I found the code searching on google, there is a lot of information
//But if there is any error or doubt, don't hesitate to comment

[CODE FROM CUBIEBOARD]
//Open the serial port TTYS0 = /dev/ttyS0
//(declarations needed by the fragment below)

#define TTYS0 "/dev/ttyS0"
#define BAUDRATE B9600 //the arduino sends at 9600 bps
#define FALSE 0
#define TRUE 1

    int fd, res;
    char buf[255];
    struct termios oldtio, newtio;
    int STOP = FALSE;

    fd = open (TTYS0, O_RDWR | O_NOCTTY);
    if (fd < 0) { perror(TTYS0); return (1);}
    tcgetattr(fd,&oldtio);
    memset(&newtio,0, sizeof(struct termios));

//It was really necessary to change the parameters
//to adapt the arduino serial settings to the cubieboard serial port
// More information here http://www.easysw.com/~mike/serial/serial.html

    newtio.c_cflag = BAUDRATE | CS8 | CLOCAL | CREAD;
    newtio.c_iflag = IGNPAR | IGNBRK | IMAXBEL ;
    newtio.c_cc[VTIME]=8;
    tcflush (fd,TCIFLUSH);
    tcsetattr(fd,TCSANOW,&newtio);

//READ
    while (STOP == FALSE)
    {
        res = read (fd,buf,255);
        if (res > 0)
        {
//Remove the newline at the end of the received data
            buf[res-1]=0;
            printf("%s\n",buf);
            if (buf[0]=='z') STOP = TRUE;
        }
    }
    tcsetattr(fd,TCSANOW,&oldtio);
    close(fd);
===================================

For the arduino code I'm making a library, so it will be easy to improve, correct, and share.

[CODE FROM ARDUINO]
//LIBRARY
Protocol::Protocol(int pin)
{
    _pin=pin;
}


void Protocol::send(String msg)
{
    Serial.print(_id +":");
    Serial.println(msg);
}

void Protocol::setId(String id)
{
    _id = id;
}  
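
For reference, the header for this first version of the library is not listed in the post; a minimal guess at what protocol.h could look like (only what the code above needs, member names taken from the .cpp) is:

//protocol.h (a guess, only what the code above uses)
#ifndef PROTOCOL_H
#define PROTOCOL_H
#include <Arduino.h>

class Protocol
{
    public:
        Protocol (int pin);
        void setId(String id);
        void send(String msg);
    private:
        int _pin;
        String _id;
};
#endif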
//CODE
 #include <protocol.h>

const int ledPin = 13; //led pin (an assumption, it is not shown in the post)
const int ldrPin = 0;  //analog input with the LDR (also an assumption)
int lightVal = 0;

Protocol protocolLight(ledPin);

void setup()
{
  Serial.begin(9600);
  pinMode(ledPin,OUTPUT);
  protocolLight.setId("light");
}

void loop()
{
  lightVal = 1023 - analogRead(ldrPin);
  protocolLight.send(String(lightVal));
  //blink with a delay that follows the sensor reading
  digitalWrite(ledPin,HIGH);
  delay(lightVal);
  digitalWrite(ledPin,LOW);
  delay(lightVal);
}


Wednesday, March 20, 2013

Communication test

Exercise 0: Communication test.

Explain.

A series of LEDs will switch on and off continuously, in both directions; the delay between switches can be modified using two buttons.
And the delay value will be sent to the cubieboard and displayed.


Code at the arduino.
.....
void setup()
{

//Initiate the serial communications at the arduino
  Serial.begin(115200);

//Digital pins for output
  for (int x=0; x<5;x++)
  {
    pinMode(ledPin[x],OUTPUT);
  }

// Digital pins for Input
  pinMode(btnUP,INPUT); 
  pinMode(btnDOWN,INPUT);
  changeTime =millis();
}

// Main program in a continuous loop
void loop()
{
  if ((millis() -changeTime) > ledDelay)
  {
    changeLED();
    changeTime= millis();
  }
  if (digitalRead(btnUP))
  {

//Serial print will be part of a function to use a protocol
    Serial.println(ledDelay++);
    delay(20);
        if (ledDelay >200) ledDelay =200;
  }

...........

void changeLED()
{
  //switch off all the leds
  for (int x =0; x<5;x++)
  {
    digitalWrite(ledPin[x],LOW);
  }
  digitalWrite(ledPin[currentLed],HIGH);
  currentLed += direction;
  if (currentLed == 4) {direction = -1;}
  if (currentLed == 0) {direction = 1;} 
}


Note: The "Serial.println(value)" call will become part of a function that handles the protocol between the arduino and the cubieboard

To see if the cubieboard can receive the data I used the information that I found here: http://linux-sunxi.org/Cubieboard/TTL

stty -F /dev/ttyS0 -crtscts  
cu -s 115200 -l /dev/ttyS0
 

I know that the circuit is very simple, and even one of the worst I've ever made, but it is just to show the idea.

If anyone has any idea or doubt, please share; I made this to learn and share.

cubieboard + arduino

A brain with senses

The cubieboard is a great system to manage software and even some hardware; it has a lot of connections.

But if you want extra power to work in real time, as is needed in robotics, drones, 3D printers, working with sensors, etc., we need something extra.

And arduino gives us this power.

Arduino can sense the environment by receiving inputs, and can act on lights, motors, servos, and other kinds of actuators.

It supports analog and digital connections.
Low power consumption.
There are a lot of shields to work with (list of shields).
A lot of libraries ready to use.
Very easy to program and develop with.

The idea is to work with the cubieboard for the software and the arduino to control the hardware, and to communicate them using the serial port.


This is the idea: one cubieboard - serial cable - arduino.
The brain needs to know what happens in its legs.

I'm not going to publish as fast as I did with the computer vision exercises; the next exercises will have a hardware part, with the arduino and its programming, and a software part with the cubieboard.

I'm still thinking about the communication between them; I'm going to start using serial communication and I'm going to make my own protocol, keeping it as simple as possible.

I also have to remember how to work with arduinos; it will take a little bit of time.

All the exercises are made following this manual 
http://www.EarthshineDesign.co.uk (but this link doesn't exist :S)
so use this one http://math.hws.edu/vaughn/cpsc/226/docs/askmanual.pdf

The next projects will be
0º Communication test
1º Light sensor
2º Temperature sensor
3º Drive a DC motor


For all these projects I don't need the cubieboard, but I'm going to use it to develop the communication protocol (cubieboard and arduino)

5º Use computer vision to follow a face (for example)

These projects can change as I improve my knowledge.
Please, if anyone has a comment or another idea, it would be marvelous to share it; leave your comment.

Sunday, March 17, 2013

Linaro + opencv + exercise

Exercises with Ubuntu-Linaro

1º Load an image

2º Contours

            libjpeg  libjpeg-turbo  Linaro
Load          199         63          126
2Gray          61         45           57
Threshold      12         12           12
Create         32         11           10
Find           53         31           71
Draw           47         47           87
Total         404        209          363
(values in milliseconds)



 3º Search one pattern

              libjpeg-turbo  Linaro
Load Source        11           6
Load Pattern       11           6
Search            488         487
Total             510         499
(values in milliseconds)

4º Haar-Features

The times are quite similar

Conclusion 

Ubuntu-Linaro is a little bit faster, but it isn't a big difference.

I prefer to use raspbian because it has a big community and the information is easy to find, and maybe it has more things than I need, but I feel more comfortable in raspbian than in ubuntu-linaro.

But I know that I'm not getting all the power out of the cubieboard; I have in mind to improve the compilation using cross compilation and take advantage of the NEON acceleration, but I'll do that later, when I have more knowledge.

Linaro + openCV

Well, at the beginning of the blog someone told me to use ubuntu-linaro, which is more focused on ARM processors.

Linaro uses the libjpeg-turbo library directly, so it is not necessary to install it as I did in Raspbian + opencv + libjpeg-turbo.

The installation of Ubuntu-Linaro is quite simple, and it can be downloaded from berryboot, in the same way as I did with Raspbian.


Installing OpenCV

Well, I found some troubles when I tried to install opencv on Linaro.

When I added the prerequisites to install openCV

sudo apt-get -y install build-essential cmake pkg-config libpng12-0 libpng12-dev libpng++-dev libpng3 libpnglite-dev zlib1g-dbg zlib1g zlib1g-dev pngtools libtiff4-dev libtiff4 libtiffxx0c2 libtiff-tools


sudo apt-get -y install libjpeg8 libjpeg8-dev libjpeg8-dbg libjpeg-progs ffmpeg libavcodec-dev libavcodec53 libavformat53 libavformat-dev libgstreamer0.10-0-dbg libgstreamer0.10-0 libgstreamer0.10-dev libxine1-ffmpeg libxine-dev libxine1-bin libunicap2 libunicap2-dev libdc1394-22-dev libdc1394-22 libdc1394-utils swig libv4l-0 libv4l-dev

I had to add this library; if not, our programs will give us errors, but there is a lot of information on google to solve the problems.

sudo apt-get install libgtk2.0-dev 

cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_EXAMPLES=ON ..

and after that continue the installation of openCV

Friday, March 15, 2013

Is anyone there?

Well, there are a couple of possibilities to check whether there is any object or person.

But one of the simplest is to use Haar-like features.

To work with Haar features, see the example facedetect.cpp in your directory OpenCV-2.4.3/samples/c/facedetect.cpp.
Basically it works with haar files which store an abstraction of the information about what is a face (or any object) and what is not.

Depending on the need and the power of our machine, we will use different haar cascades. We can even build special cascades for our own purposes (but my i3 with 4 GB of RAM took 3 days to build one haar cascade for cars).

I used these three
  

"./haarcascades/haarcascade_frontalface_alt_tree.xml" (3.5 MB) (over 500ms)
"./haarcascades/haarcascade_frontalface_alt2.xml" (0.8 MB) (over 300 ms)
"./haarcascades/haarcascade_eye.xml" (0.4 MB) (over 200 ms)


It also depends on the size of the image; I used the lowest quality (160x120), which will work for the first ideas.

The code is based on the sample, so you'll find the code there; any doubt, ask me.
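
Just to give an idea of the shape of that code, here is a minimal sketch using the OpenCV C++ API (the cascade file is one of the three listed above, and the image is passed as the first argument; treat it as an outline, not as the exact code of the sample):

#include <opencv2/opencv.hpp>
using namespace cv;

int main (int argc, char** argv)
{
    //Load one of the cascades listed above
    CascadeClassifier cascade;
    if (!cascade.load("./haarcascades/haarcascade_frontalface_alt2.xml"))
        return -1;

    Mat img = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
    equalizeHist(img, img);

    //Detect the faces and draw a rectangle around each one
    std::vector<Rect> faces;
    cascade.detectMultiScale(img, faces, 1.1, 3, 0, Size(30,30));
    for (size_t i = 0; i < faces.size(); i++)
        rectangle(img, faces[i], Scalar(255), 2);

    imwrite("out.jpg", img);
    return 0;
}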



With this exercise I finish the first part of the project based on computer vision.
We could do a lot more things: blobs, our own haar cascades, structural analysis, movement studies, etc... but all those exercises are outside this scope.

Thursday, March 14, 2013

Looking for a fish

I would like to go diving with fishes, but it's time to study, so I've taken advantage of my studies to catch the fish with a web cam.

So, how can I go fishing with my web cam and a plastic fish?

With this exercise I've checked the power of our cubieboard; it's a merge of the previous exercises: search a pattern and how to use the web cam.

The code is very similar to the pattern search, but with a small difference: it's a video instead of a photo (video = photo1, ..... photo n ...., photo n+m).
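
The per-frame loop is roughly this (a sketch that reuses the fastMatch() function from the "Search one image" post; the pattern file name and the 80% coincidence threshold are just examples):

    VideoCapture webCam(0);
    Mat frame;
    Mat pattern = imread("fish.jpg"); //image of the plastic fish (example name)
    Rect roi;

    while (waitKey(1) <= 0)
    {
        webCam >> frame; //video = photo1 ... photo n
        if (fastMatch(frame, pattern, &roi, 80))
            rectangle(frame, roi, Scalar(0,255,0), 2); //mark where the fish was found
        imshow("Fishing", frame);
    }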

The size of the video is 160x120; with bigger videos (320x240) it takes around 200 ms to find it.

Here is the result, and I think it is powerful enough for almost all purposes.




So with 33 ms to detect an object it will be fine for the simplest projects, but I have to try with Linaro and with a new compilation of openCV.
I'll try to do this next week, when I finally have a monitor with HDMI.

web cam and cubieboard

Well, I'm back again with computer vision, now with a webcam.

The idea is to see how fast our cubieboard is at doing some exercise, such as detecting faces or detecting an object.

I've saved a couple of videos in different resolutions; they are recorded from my desktop and you can see how the lag behaves. It's not a realistic exercise, but it can give us a little information.

First the code

#include <cv.h>
#include <highgui.h>
#include "../00_include/tools.h"

using namespace cv;

#define WIDTH  640
#define HEIGHT 480

int main (int argc, char** argv)
{

Mat img;
bool salir=false;
double flagGrab,flagRetrieve, flag;


VideoCapture webCam(0);
//Change the size of the resolution (640x480, 320x240, 160x120)

    webCam.set(CV_CAP_PROP_FRAME_WIDTH,WIDTH);
    webCam.set(CV_CAP_PROP_FRAME_HEIGHT,HEIGHT);
    while ( waitKey(1) <= 0 && !salir)
    {
        flag=getTickCount();
        if (!webCam.grab()) salir=true; //time spent grabbing the frame
        flagGrab=getTick(flag);


        flag=getTickCount();
        webCam.retrieve(img); //time spent decoding the frame
        flagRetrieve=getTick(flag);
//here the code will start to work with the image
        imshow("Web Cam",img);

    }
return 0;
}

and here is the result; the videos are a little bit boring

640x480
320x240
160x120

With these videos I'll try to make the exercise of detecting a pattern, as a real-time detection.


Tuesday, February 26, 2013

Search one image

How do we search for an image inside another image?


Well, an interesting thing in computer vision is the possibility of searching for one image inside another; this process could be used in many systems to search for objects, or to count them (but the images should be very similar and the environment must be under strict control).

First I took the image to search for; in this case it is the insignia of doc Mac Coy, just for this exercise






turbo

Standard


Using gray images directly


In this case the load is a little bit longer with libjpeg-turbo than with the standard one, and the find process is similar.

Here is the code (in opencv/doc or opencv/samples there are more examples).

To use the function matchTemplate(imgSrc, imgPattern, .....) both images must be in grayscale, so we can load them directly in gray and avoid this step, and we save 10 ms.

This function finds the area where the pattern is inside the source.

bool fastMatch (const Mat& _source, const Mat& _pattern,Rect* rectROI, double coincidence)
{

Mat source;
Mat pattern;

Size sourceSize;
Size patternSize;
Size imgResultSize;

Point maxLoc, pointRectROI;
double maxVal;

bool found = false;
// we can avoid this step if we load the image directly in gray
    cvtColor(_source,source,CV_BGR2GRAY);
    cvtColor(_pattern,pattern,CV_BGR2GRAY);

//We need to take the size of the images.
    sourceSize = source.size();
    patternSize = pattern.size();

    imgResultSize.width = sourceSize.width - patternSize.width + 1;
    imgResultSize.height = sourceSize.height - patternSize.height + 1;
    Mat imgResult(imgResultSize,CV_32FC1);
//Function that found the image
    matchTemplate (source, pattern, imgResult,CV_TM_CCOEFF_NORMED);
    minMaxLoc(imgResult,NULL,&maxVal,NULL,&maxLoc);
    maxVal *=100;
    if (maxVal >= coincidence)
    {
        *rectROI = Rect(maxLoc.x, maxLoc.y, patternSize.width, patternSize.height);
        found=true;
    }
    return found;
}
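
A minimal example of how to call it (the file names and the 80% coincidence are just examples):

    Mat source = imread("photo.jpg");
    Mat pattern = imread("pattern.jpg");
    Rect roi;

    if (fastMatch(source, pattern, &roi, 80))
    {
        rectangle(source, roi, Scalar(0,255,0), 2); //draw the region where the pattern was found
        imwrite("found.jpg", source);
    }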

Monday, February 25, 2013

libjpeg vs libjpeg-turbo

With the first exercise I have the opportunity to compare libjpeg vs libjpeg-turbo.

Before installing libjpeg-turbo I made a backup with the standard one, so it's a great moment to compare both.

The code used is the same in both cases (Contours) and here are the results

libjpeg-turbo

libjpeg

            libjpeg  libjpeg-turbo
Load          199         63
2Gray          61         45
Threshold      12         12
Create         32         11
Find           53         31
Draw           47         47
Total         404        209
(values in milliseconds)

Finally, libjpeg-turbo is faster than libjpeg; we will check in the future what happens with Linaro vs Raspbian.

For the moment I'll be working with Raspbian with libjpeg-turbo

Saturday, February 23, 2013

Contours in openCV

Detecting contours.


The process of detecting contours is one of the simplest.

Load the image -> change to gray -> use the threshold function -> find the contours -> draw the contours

The code (a fragment of main(); the images and timing variables used below are declared like this)

Mat imagen, imgGris, imgContorno;
double flag, flagLoad, flag2Gray, flagThreshold, flagCreate, flagFind;

//Load the image in colour
        flag=(double) getTickCount();
        imagen=imread(argv[1],CV_LOAD_IMAGE_UNCHANGED);
        flagLoad=getTick(flag);
//Change the image to gray
        flag=(double)getTickCount();
        cvtColor(imagen,imgGris,CV_BGR2GRAY);
        flag2Gray=getTick(flag);
//use the threshold to separate few objects
        flag=(double)getTickCount();
        threshold(imgGris,imgContorno,122,255,THRESH_BINARY);
        flagThreshold=getTick(flag);
//we need a place to put the new image
        flag=(double)getTickCount();
        Mat dst = Mat::zeros(imgGris.rows,imgGris.cols,CV_8UC3);
        flagCreate=getTick(flag);

        namedWindow("gris",CV_WINDOW_NORMAL);
        namedWindow("contorno",CV_WINDOW_NORMAL);
        namedWindow("binario",CV_WINDOW_NORMAL);

        imshow ("gris",imgGris);
        imshow ("contorno",imgContorno);
        cvMoveWindow("gris",300,50);
        cvMoveWindow("contorno",600,50);

vector< vector<Point> > vecContornos;
vector<Vec4i>jerarquia;
//Find the contours
        flag = (double)getTickCount();
        findContours(imgContorno,vecContornos,jerarquia,CV_RETR_CCOMP,CV_CHAIN_APPROX_SIMPLE);
        flagFind =getTick(flag);
        flag = (double)getTickCount();
        for (int idx=0;idx >=0;idx=jerarquia[idx][0])
        {
                drawContours(dst,vecContornos,idx,WHITE,5,8,jerarquia);
        }

The results 

And finally we have these results from the original image


Original image


Result image
 All the values are in milliseconds 

This exercise gives us some useful information.

The main information is the threshold to separate the right image (12 ms) and finding the contours (31 ms): around 50 ms to detect objects, which is not bad at all for a device that could use this information.

But the structural analysis could take more time; we will see what happens in the next exercises.

Any comment will be appreciated

Friday, February 22, 2013

Raspbian + OpenCV + libjpeg-turbo

A while ago I did the installation of openCV over Raspbian, but after talking at the Cubieboard community, it turns out there is a library called "libjpeg-turbo" which is faster than the standard one.

I read a lot of information about it, and as far as I know, the Linaro distribution uses it (I have to double check).

But at this moment I can't wipe raspbian, so I have to install the library separately.

I found one article that explains what to do
How to compile OpenCV 2.4.0 with libjpeg-turbo
To build OpenCV 2.4.0 with libjpeg-turbo you need:
  1. build libjpeg-turbo as a static library
  2. configure OpenCV with the following command:
    cmake -DWITH_JPEG=ON -DBUILD_JPEG=OFF -DJPEG_INCLUDE_DIR=/path/to/libjepeg-turbo/include/ -DJPEG_LIBRARY=/path/to/libjpeg-turbo/lib/libjpeg.a /path/to/OpenCV

But I'm a little bit out of training and found some questions.
So what do I have to do? I need a cookbook, and since I didn't find one I had to make one. (please let me know about any mistake)

1º Download libjpeg-turbo

Master of libjpeg-turbo :https://github.com/aumuell/libjpeg-turbo/archive/master.zip

To make the installation (follow these steps)
2º Prepare the installation

unzip libjpeg-turbo-master.zip 
cd {source_directory}
autoreconf -fiv 
(note: if autoreconf doesn't exist, install it with "sudo apt-get install dh-autoreconf")
mkdir {build directory}
#cd {build_directory} sh {source_directory}/configure [additional configure flags]
../configure --enable-static
 2-Bº After doing the configuration we need to make the library static. (How do I make a static library :| ) -->
"-fPIC" is an abbreviation for Position Independent Code, and it has to be passed to create the library code objects; without that flag, code specific to the source would be used, and then the library would fail.

The command "../configure --enable-static" will create some files.
We have to edit the “Makefile”:
locate the line CC = gcc
and change it to CC = gcc -fPIC
sudo make
sudo make install
After all these steps we'll have libjpeg-turbo installed.

3º Now, how do we link our OpenCV to libjpeg-turbo? It's easy, we have to create our CMAKE configuration again
#I removed the python compatibility 
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_EXAMPLES=ON 
-DWITH_JPEG=ON -DBUILD_JPEG=OFF 
-DJPEG_INCLUDE_DIR=/path/to/libjepeg-turbo/include/ 
-DJPEG_LIBRARY=/path/to/libjpeg-turbo/lib/libjpeg.a /path/to/OpenCV .. 
 

When the CMAKE command is done we will see something like this

- Detected version of GNU GCC: 46 (406)
-- Found JPEG: /opt/libjpeg-turbo/lib/libjpeg.a  
-- Found Jasper: /usr/lib/arm-linux-gnueabihf/libjasper.so (found version "1.900.1") 
-- Found OpenEXR: /usr/lib/libIlmImf.so
-- Looking for linux/videodev.h
-- Looking for linux/videodev.h - not found
-- Looking for linux/videodev2.h
-- Looking for linux/videodev2.h - found
-- Looking for sys/videoio.h
-- Looking for sys/videoio.h - not found
-- Looking for libavformat/avformat.h
-- Looking for libavformat/avformat.h - found
-- Looking for ffmpeg/avformat.h
----------------
-- 
--   Media I/O: 
--     ZLib:                        /usr/lib/arm-linux-gnueabihf/libz.so (ver 1.2.7)
--     JPEG:                        /opt/libjpeg-turbo/lib/libjpeg.a (ver 80)
--     PNG:                         /usr/lib/arm-linux-gnueabihf/libpng.so (ver 1.2.49)
--     TIFF:                        /usr/lib/arm-linux-gnueabihf/libtiff.so (ver 42 - 4.0.2)
--     JPEG 2000:                   /usr/lib/arm-linux-gnueabihf/libjasper.so (ver 1.900.1)
--     OpenEXR:                     /usr/lib/libImath.so /usr/lib/libIlmImf.so /usr/lib/libIex.so /usr/lib/libHalf.so /usr/lib/libIlmThread.so (ver 1.6.1)


make
#we have to wait a little bit (you can stop and continue later)

sudo make install 
 

Monday, February 18, 2013

Repeat the exercises

Next exercises 

A while ago I made some exercises to study computer vision, and I have to repeat them.
I made them on my computer (i3 and 6 GB of RAM), and I have to admit that the code was a little bit dirty (not very efficient).

But I can show the expected results, and that will give a better idea of what I want to do with the cubieboard.

.-Detect and follow

This exercise consists of taking a part of an image, the eye in this case, and detecting and following it.


.- Contours.

One of the most important things is to detect the different contours and the centers of objects.


.- Blobs
Blobs are contiguous pixels similar enough to be considered part of the same piece








Saturday, February 16, 2013

I need a backup

How to do a backup?

Well, we have our micro SD working fine, with all the libraries that we need.

For example the OpenCV library; but I have to try to modify the compilation that I made in Cubieboard + openCV, and there is a small risk of making some mistake and having to start the install process all over again.

To avoid this problem, I am going to do a backup of the micro SD; it is quite simple in Linux. I don't know about Windows, but if someone can do it there, please leave a comment.

.- Take out the micro SD and connect it to the computer with a USB adapter

ikaro@nirvana ~ $ dmesg
[ 2298.045043] sd 3:0:0:0: [sdc] 15548416 512-byte logical blocks: (7.96 GB/7.41 GiB)
[ 2298.050815]  sdc: sdc1 sdc2
[ 2298.053659] sd 3:0:0:0: [sdc] Attached SCSI removable disk

The card has two partitions

ikaro@nirvana ~ $sudo fdisk /dev/sdc

Disk /dev/sdc: 7960 MB, 7960788992 bytes
245 heads, 62 sectors/track, 1023 cylinders, total 15548416 sectors
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048      131071       64512    e  W95 FAT16 (LBA)
/dev/sdc2          131072    15548415     7708672   83  Linux

So we want to duplicate the micro SD

With all this information we can continue to clone / duplicate / backup

ikaro@nirvana ~ $ dd bs=1M if=/dev/sdc of=raspbery_Cubie.img

I tried with bs=4M but the backup did not work.
This command takes a few minutes.

But finally we will have our backup of the micro SD

ikaro@nirvana ~/backup_cubieboard $ ls -lh
total 7,5G
-rw-r--r-- 1 ikaro ikaro 7,5G 2013-02-16 10:40 raspbery_Cubie.img.

Now we have to put this image onto a new micro SD, at least of the same size, and we can write the image into it

ikaro@nirvana ~ $ dd bs=1M  if=raspbery_Cubie.img of=/dev/sdc

And we will have the backup done.

I know it is a simple process, but it is very useful

And if you want to make a NAND flash backup, it is quite easy too, from raspbian.

 dd if=/dev/nand of=/some/place/with/enough/space

Friday, February 15, 2013

Spanish to English

¿Por qué cambio a ingles ?

Bueno, parece que en el futuro cercano me quedaré en paro, y el futuro está fuera de España, ya sea con los clientes en el extranjero o trabajando en una compañia.

Así que decidí practicar todo el inglés que pueda, y el mejor método es el de inmersión lingüística, así que trataré de hacer todo lo que pueda en inglés

Normalmente, en la tecnología el inglés es el vehículo de comunicación, estudio y desarrollo.
El inglés que voy a utilizar será sencillo de seguir ya que serán más que nada pruebas y resultados y prácticamente se dice casi siempre en inglés.

De todas formas seguiré respondiendo en español a quien me pregunte en español

Larga vida a la "ñ"

Why I am changing to English ?


Well, it seems that in the near future I will be out of work, and I know that the future is outside of Spain, either with clients abroad or working in a company myself.

So I know that I have to practice all the English I can, and the best method is linguistic immersion, so I will try to do as much as I can in English.

Normally, in technology English is the vehicle of communication, study and development.

My English will be quite simple and plain, because it will be based on tests and results, and in most cases their names come from English anyway.

Meanwhile, I will try to answer the English questions in English.
But please forgive my mistakes.

Long live the "ñ"

Sunday, February 10, 2013

Load an Image


First step with OpenCV (corrected)

Load an image, Lena is here.

I made this exercise a few weeks ago, but I had a mistake and the obtained data were not correct; the first test gave me around 140 milliseconds to load and display an image.

Too much time if we think that in one second of video we could have up to 30 images per second or even more; one image every 33 milliseconds.

Note: This comparison is not really fair; video uses different compression, with I-frames (real images) and P-frames (predictive frames).


There are more things to do, such as looking at libjpeg-turbo or changing Raspbian for Linaro.

But the first thing to do is to correct the code and measure each step separately: load the image, display the image.

This is the new code.

#include <cv.h>
#include <highgui.h>
#include "../00_include/tools.h"

using namespace cv;

int main (int argc, char** argv)
{
Mat imagen;
double flag,flagCarga,flagDisplay, tiempo;
char resultadoCarga[25];
char resultadoDisplay[25];
char resultado[25];
Size imgSize;
//flag of time
        flag =(double)getTickCount();
        imagen=imread(argv[1],CV_LOAD_IMAGE_UNCHANGED);
//calculate the time
        sprintf(resultadoCarga,"Load %2.f",getTick(flag));
        namedWindow("FOTO",CV_WINDOW_AUTOSIZE);
//Fetch the size
        imgSize = imagen.size();
        sprintf(resultado,"Size width=%d height=%d",imgSize.width,imgSize.height);
//New flag of time
        flag = (double)getTickCount();
        imshow("FOTO",imagen);
//calculate the time
        sprintf(resultadoDisplay,"Display %2.f",getTick(flag));
        printf("%s\n",resultadoDisplay);
//put the data on the image
        putText(imagen,resultadoCarga, Point(10,20),FONT_HERSHEY_SIMPLEX,0.5,BLUE,1);
           putText(imagen,resultadoDisplay,Point(10,35),FONT_HERSHEY_SIMPLEX,0.5,BLUE,1);
        putText(imagen,resultado,Point(10,50),FONT_HERSHEY_SIMPLEX,0.5,BLUE,1);
//Save the image
        imwrite("out.jpg",imagen);
        waitKey();
}


Well, with the corrected software we have these values on the photo.


The values are:
.- Load 41 milliseconds
.- Display: 5 milliseconds

Well, these values are not too bad, but this is not computer vision yet: we did not do anything with the image, such as checking blobs, detecting a face, an eye, some color, detecting objects, etc.

Aprender sin reflexionar es malgastar la energía. Confucio (551 AC-478 AC).
Learning without thinking is labor lost. Confucius (551BC - 478 BC)

We can see that we need 41 milliseconds to load an image, so we should check with libjpeg-turbo and see if the load time gets reduced, and also try to use a SATA HD.
The display time was 5 milliseconds; if it's computer vision we do not need to see the image, the cubieboard just has to "see" it and process it.

This exercise does not give us too much information, because it is not related to computer vision.
I will have better information when I have time to make the computer vision exercises: detecting contours, geometry, detecting faces, eyes, blobs (contiguous pixels with similar color), etc.

But one important thing is to try Linaro; I can not do it at this moment, I have to wait a couple of weeks.


There is a very interesting piece of information that I learned at the Cubieboard community


getTick(flag)

double getTick(double flag)
{
/*
This function returns the time in milliseconds
since the "flag" moment till now
*/
//Get the frequency in ticks per millisecond
double frecuencia = getTickFrequency() / 1000 ;
double t = (double)getTickCount();
return ((double)t - flag)/frecuencia;
}


 



Wednesday, February 6, 2013

Next step: computer vision.

Computer vision

Well, at this point we are going to get down to work and give a bit more life to the cubieboard.

 Web cam

An expensive camera with many megapixels, or a cheap one with few megapixels?

We want our "system" to be able to do some kind of recognition and work with images; so far so good.

But how does an image work in a computer?
An image is nothing more than a matrix of X*Y = megapixels.
More resolution means more information, but at the vision level it is redundant information.
The weakest point of the cubieboard is its processing speed, so we will try to use the smallest possible number of pixels; for us the image may lose quality, but the cubieboard will not lose effectiveness and it will allow better performance.
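
In OpenCV terms (the library used in the rest of the blog) that matrix is just a Mat, and we can look at it directly; a tiny example (the file name is only an example):

    Mat img = imread("photo.jpg");        // X*Y matrix with 3 channels (BGR)
    Vec3b pixel = img.at<Vec3b>(10, 20);  // pixel at row 10, column 20
    cout << img.cols << "x" << img.rows << " pixels" << endl;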

This is my web cam

pi@raspberrypi ~ $ lsusb                                                                                                                                                          
Bus 003 Device 002: ID 046d:0819 Logitech, Inc. Webcam  C210                                                                                                            



What are we going to do in computer vision?

This is not meant to be a computer vision course, but a series of practical exercises, where we can see the examples and how the cubie behaves. Although, if there are doubts, between all of us we can try to solve them.

We want computer vision in real time, so we are going to work with video images and not static images.

This is the plan; it can change, and if anyone has a doubt, comment or contribution, feel free to make it
  1. Image acquisition
    1. Photo
    2. Video
  2. Contour detection
  3. Geometries
  4. Region of interest (ROI)
  5. Image detection
  6. Augmented reality
  7. Blobs
The plan will take some time to complete, and I will try to do it over the next two months (if life's complications allow it)

Tuesday, February 5, 2013

I am a nomad

I am a nomad

How can I work with the cubieboard?

I have been left without a fixed home; I am a nomad and I live off friendship, the wind and the wifi my friends lend me.

I only have room for a laptop, and I have no keyboard or mouse for the cubie, so how do I work now?

Easy: I connect over SSH.
Before being left without a monitor or keyboard, I configured a fixed IP on the cubie over the ethernet port.
And with a crossover cable we can connect.
The cubieboard will reach the internet over the wifi, but for now I don't need it.

ikaro@nirvana ~ $ ssh -X pi@192.168.0.50
pi@192.168.0.50's password:
Linux raspberrypi 3.4.19-a10-aufs+ #4 PREEMPT Fri Dec 28 23:40:52 CET 2012 armv7l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Thu Jan 24 19:25:17 2013 from 192.168.0.20
pi@raspberrypi ~ $

ssh -X usuario@ip is used to be able to run graphical applications.

And ta-da, problem solved, and we are active again.

I can already tell you that I tried the webcam and tried to do a face detection :)
and IT WORKS, but it takes 3 seconds to detect a face; that will be in the next post.

Saturday, January 19, 2013

Cubieboard + OpenCV


What is computer vision?

When a computer processes an image, it does not understand what is really in it.
But what if we need to detect a person, or a specific face, or an object, or follow a color, or find a shape, or interpret what we see, etc.?
Computer vision takes care of all of this.

How do we apply computer vision?

Well, this is where OpenCV comes to our aid; it is a set of libraries that abstracts the programmer from the most laborious part of the programming and lets us use ready-made functions for every need we have.
There are installers for Linux, android, windows, ios

Installing OpenCV


I took all the technical information from this blog, which is very good, not only for this article but for many others.

Source: http://mitchtech.net/raspberry-pi-opencv
Source: http://docs.opencv.org/doc/tutorials/introduction/linux_install/linux_install.html#linux-installation

Raspbian : Linux raspberrypi 3.4.19-a10-aufs+
OpenCV: OpenCV-2.4.3.tar.bz2 (I recommend going in and downloading the latest version)
http://OpenCV.org

Maybe before doing the installation you should decide between libjpeg and libjpeg-turbo?
  1. Where are we going to do the installation?
    1. The card. Is there space? 2 GB is enough, but an 8 GB one is better (we will keep needing it)

      S.ficheros     Tamaño Usados  Disp Uso% Montado en
      rootfs           7,3G   1,5G  5,5G  22% /


      NOTE: It gets a little slow, but it works

      Why on the card? In the future I want this board to be the "logical reasoner" (I don't like the term artificial intelligence); besides being portable it must consume little, and a hard disk, however light, has a very high consumption for batteries (1A at 5 V)
  2. Download the latest version
    1. I installed this one, OpenCV-2.4.3.tar.bz2; in any case look for the latest version.
  3. Dependencies
    Well, to do the installation we need to give our machine a base set of necessary tools. Describing each of them is beyond this blog, but the most interesting one is cmake, which will let us configure our compilation and installation.

    sudo apt-get -y install build-essential cmake pkg-config libpng12-0 libpng12-dev libpng++-dev libpng3 libpnglite-dev zlib1g-dbg zlib1g zlib1g-dev pngtools libtiff4-dev libtiff4 libtiffxx0c2 libtiff-tools

    sudo apt-get -y install libjpeg8 libjpeg8-dev libjpeg8-dbg libjpeg-progs ffmpeg libavcodec-dev libavcodec53 libavformat53 libavformat-dev libgstreamer0.10-0-dbg libgstreamer0.10-0 libgstreamer0.10-dev libxine1-ffmpeg libxine-dev libxine1-bin libunicap2 libunicap2-dev libdc1394-22-dev libdc1394-22 libdc1394-utils swig libv4l-0 libv4l-dev python-numpy libpython2.6 python-dev python2.6-dev libgtk2.0-dev pkg-config
  4. Now let's install step by step
    1. Our OpenCV directory with its version OpenCV-2.4.3.tar.bz2

      tar -xvjpf OpenCV-2.4.3.tar.bz2
      rm OpenCV-2.4.3.tar.bz2
      cd OpenCV-2.4.3
      mkdir build
      cd build

    2. Preconfiguring the compilation
      With this we tell it to compile what we need; this creates a fairly standard installation, as well as installing the python support


      cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_PYTHON_SUPPORT=ON -D BUILD_EXAMPLES=ON ..

      make
      sudo make install

  5. Last steps
    1. Some configuration to point to the libraries.

      $ sudo vi /etc/ld.so.conf.d/opencv.conf

      If it does not exist, we create it and add the following line

      /usr/local/lib

      We configure the dynamic linking of the libraries

      $sudo ldconfig -v

    2. We configure the bashrc system globally

      sudo vi /etc/bash.bashrc
      And we add the following line

      PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig export PKG_CONFIG_PATH
  6. Testing OpenCV
    1. We go to the C examples directory
      ~/OpenCV/OpenCV-2.4.3/build/bin

      convexhull

      kmeans

      drawing

Hull

Cluster


Drawings



Next steps