Welcome to my Career Portfolio

Last updated on 18th August, 2013 by Shervin Emami. Originally posted on 12th September, 2009.

Below is a portfolio of projects I have worked on, and the left-side menu links to my Draw3D software and Computer Vision programming tutorials. If you are interested in working with me, feel free to visit Contact Me / About Me.

My current Resume / CV is available to download here: Resume.pdf

My Masters thesis in Robotics is available to download here:
'A Framework for the Long-Term Operation of a Mobile Robot via the Internet': MastersThesis.pdf

 
Humanoid robot

Talking Humanoid Robot with the Most Realistic Face in the World

Duration: 5 months full-time (Dec 2008 - May 2009)
Details: [Hanson Robotics]

I was in charge of getting a custom-designed, untested 62 Degree-Of-Freedom humanoid robot to speak & listen in English & Arabic, learn & recognize faces, look towards interesting items such as faces or moving regions, perform realtime video stabilization, and perform basic human motions with its face, head & arms (including lip-synced speech in both languages). We also re-dressed the robot as a fake military guard at the IDEX Defense Expo (Abu Dhabi, Feb 2009).

My work involved writing USB drivers for the new digital servo motors (which, it seems, no-one else in the world had used before), debugging the robot's many electrical-noise, cabling, power & heat issues (since the robot had never been tested before), creating animation scripts to move all 62 motors in synchronisation with its speech, and developing the robot's computer vision, providing video stabilization and visual saliency tracking so it could follow a person's face or hands.

Humanoid body and hands
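As a rough illustration of the animation-script idea (hypothetical code, not the actual control software), each servo channel can store keyframes on a shared timeline and be interpolated every frame, which is what keeps all the motors & the speech audio synchronised:

```python
# Hypothetical sketch of keyframed servo animation: each channel stores
# (time, position) keyframes, and poses are linearly interpolated so that
# all motors (and speech events) stay synchronised on one timeline.

def interpolate(keyframes, t):
    """Linearly interpolate a servo position at time t from (time, pos) keyframes."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, p0), (t1, p1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return p0 + alpha * (p1 - p0)

def pose_at(animation, t):
    """Compute the position of every servo channel at time t."""
    return {servo: interpolate(kf, t) for servo, kf in animation.items()}

# Example: a 2-channel fragment of what a 62-DOF animation script might hold.
animation = {
    "jaw":  [(0.0, 0), (0.5, 30), (1.0, 0)],   # open the mouth, then close it
    "neck": [(0.0, 0), (1.0, 45)],             # slowly turn the head
}
```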

 
Tbot balancing

A Reconfigurable 140kg Balancing Robot for the US Military

Duration: 15 months full-time (Jul 2006 - Sep 2007)
Details: [Info, Photos, Videos],   [US Patent #20080105481],   [Repurposed as a Robocop]

We designed & built a human-sized robot that could balance on 2 wheels by itself (like a Segway), climb up steps, fall from 3ft and continue driving, and also change to a 4-wheeled mode (like a car), since it was designed for rugged military operations.

I was in charge of the embedded software development (programmed entirely in hard-realtime Java), some of the electronics, testing the robot's operation, and creating the stair-climbing algorithm. The robot contained 10 motors (including 4 highly efficient Harmonic Drive gearboxes), a custom active suspension system, and weighed 300lbs (140kg).

Tbot car and A-frame modes
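The balancing itself can be pictured as an inverted-pendulum control problem. Below is a deliberately simplified sketch (made-up gains and physics, and plain Python rather than the hard-realtime Java that ran on the robot) of a PD feedback loop driving the tilt angle back to vertical:

```python
import math

# Illustrative sketch (not the actual controller): a two-wheeled balancer is
# essentially an inverted pendulum, so a minimal PD feedback loop on the tilt
# angle conveys the idea. Gains and physical constants here are made up.

def simulate(steps=2000, dt=0.005, kp=60.0, kd=8.0):
    g, length = 9.81, 1.0          # gravity, pendulum length (metres)
    angle, rate = 0.1, 0.0         # start tilted 0.1 rad from vertical
    for _ in range(steps):
        accel = (g / length) * math.sin(angle)   # gravity tips it further over
        accel -= kp * angle + kd * rate          # PD correction from the wheels
        rate += accel * dt
        angle += rate * dt
    return abs(angle)

# A sufficiently-tuned loop drives the tilt back towards zero over time.
residual_tilt = simulate()
```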

 
Face Preprocessing

3D Faces From 2D Photos, & Image Processing of Difficult Facial Images

Duration: 2 years part-time (Sep 2009 - July 2011)
Details: [nViso]

Image processing of difficult facial images:

First I created a face image preprocessing library that can take low-quality photos with uneven lighting and convert them to images where the face is much clearer. It works by detecting the face and eyes in the image using several combined methods, then performing Histogram Fitting to standardize the brightness across the left and right sides of the face, and finally overlaying the processed face over the face in the original image. In the image on the right, notice that the faces are hard to see in the original images, and one side of the face is often brighter than the other, whereas in the overlaid image, not only is the face much clearer, but the left and right sides of the face are equally bright.
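A minimal sketch of the lighting-normalization idea (illustrative code only; the real library uses Histogram Fitting to a target distribution rather than the plain equalization shown here): equalizing the histograms of the left and right halves of a grayscale face separately makes a shadowed side end up as bright as the lit side:

```python
# Illustrative sketch, not the actual library: histogram-equalize the left and
# right halves of a grayscale face separately, so both sides end up with the
# same brightness distribution even if one side was in shadow.

def equalize(pixels):
    """Histogram-equalize a flat list of 0-255 grayscale values."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    n = len(pixels)
    return [round(255 * cdf[p] / n) for p in pixels]

def equalize_halves(face, width):
    """Equalize the left and right halves of a row-major grayscale image."""
    left, right = [], []
    for i, p in enumerate(face):
        (left if i % width < width // 2 else right).append(p)
    left, right = equalize(left), equalize(right)
    out, li, ri = [], 0, 0
    for i in range(len(face)):
        if i % width < width // 2:
            out.append(left[li]); li += 1
        else:
            out.append(right[ri]); ri += 1
    return out
```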

Generating a 3D face from a 2D photo:

3D orientation from a 2D face
After my face preprocessing library had prepared an image, nViso used Active Appearance Models (AAM) to estimate the 2D position of ~100 points on the face in the image. My software then used the POSIT algorithm to estimate the 3D position & orientation of the face relative to the camera (as shown above). It then used this 3D pose to transform the 2D face image onto a generic 3D face model with ~100 corresponding points, allowing the face to be rotated in 3D with artificial lighting applied from any 3D location (as shown below). I also developed the 3D interactive GUI shown above (using OpenGL directly). Since my code could generate an image of a face from any desired orientation and under any desired lighting conditions, nViso used this as part of an emotion recognition system for the advertising industry.
3D face from a 2D photo
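For readers curious about the camera model involved: POSIT estimates pose by inverting a scaled-orthographic projection. The hypothetical helper below goes in the "forward" direction, projecting 3D model points to 2D for a given rotation, translation and scale, which is also the mapping used to align a generic 3D face model with a 2D photo:

```python
# Illustrative sketch: POSIT inverts a scaled-orthographic camera model.
# This helper does the forward projection from 3D model points to 2D,
# given a 3x3 rotation matrix R, a 2D translation t, and a scale factor.

def project(points3d, R, t, scale):
    """Scaled-orthographic projection of 3D points to 2D image coordinates."""
    out = []
    for (x, y, z) in points3d:
        # Rotate, translate, then drop the depth axis and apply the scale.
        rx = R[0][0]*x + R[0][1]*y + R[0][2]*z + t[0]
        ry = R[1][0]*x + R[1][1]*y + R[1][2]*z + t[1]
        out.append((scale * rx, scale * ry))
    return out

# With the identity rotation, points simply project to scaled (x, y).
R_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```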

 
RatSLAM robot

Intelligent Robot that Recharges its Own Batteries and is Controllable Through the Web

Duration: 2 years full-time (spread between Jan 2005 - May 2009)
Details: [Official website], [Published research paper]

For my Masters Degree in Robotics Research, I created an interactive website that lets anyone in the world control a complex mobile robot in Australia from their web browser. I also created a hardware & software system that lets the robot search for its battery charger and recharge its own batteries whenever they are low, so that it could remain continuously available on the Internet. Ironically, the "long-term robot" was dismantled once I left the lab!
RatSLAM Web Interface

 

Draw3D Freeware 3D Modeller

screenshot

Duration: 5 years part-time (Jan 1999 - Aug 2003)
Details: [Draw3D website]

I created a complete 3D Modelling application for Windows, designing and implementing the entire software on my own to gain experience in C & x86 Assembly language applied to 3D graphics using software rendering, where efficiency and performance are critical yet an intuitive Graphical User Interface is also vital. It is composed of more than 30,000 lines of code, renders 3D graphics in software faster than most commercial packages, and has been downloaded from the internet by over 120,000 people worldwide.

 
FaceBot robot

Face Recognition on a Robot using Stereo Camera or FaceBook Photos

Duration: 9 months full-time (Jun 2008 - Apr 2009)
Details: [Published research paper]

I created a realtime Face Detection and Face Recognition system that can be trained &/or tested using the Facebook social network. I modified the OpenCV Haar Detector to detect faces correctly in 80% of typical Facebook photos, despite the enormous range of lighting conditions, angles and facial expressions found in realistic Facebook tagged photos, compared to the traditional fixed-environment photos that are typically used for Face Detection and Face Recognition systems.
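Combining several detection passes means the candidate rectangles they return have to be merged. A hypothetical sketch of one common approach (the real system's merging rules may differ) is to greedily merge boxes whose intersection-over-union is high:

```python
# Hypothetical sketch of one way to combine candidate face rectangles from
# several detection passes: greedily merge rectangles whose overlap (IoU)
# is high, keeping the average box. Not the actual system's merging rules.

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) rectangles."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def merge_detections(rects, threshold=0.5):
    """Merge rectangles that overlap by more than `threshold` IoU."""
    merged = []
    for r in rects:
        for i, m in enumerate(merged):
            if iou(r, m) > threshold:
                merged[i] = tuple((a + b) / 2 for a, b in zip(m, r))
                break
        else:
            merged.append(r)
    return merged
```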

I used the Face Detection and Face Recognition system on a robot (part of a Microsoft-funded project) that would try to be friends with people. It held conversations (in English) with the person while training itself to learn their face from camera snapshots as well as their tagged Facebook photos. When the robot met the person again on a different day, it could recognise them from its camera and their Facebook photos, and chat about what it had done since they last met.

Face Detection

 
AR on desktop

Augmented Reality on desktop & iPhone

Duration: 6 months full-time (June 2010 - Dec 2010)
Details: [Ethervision]

First I created a 3D Augmented Reality program for desktop (shown on the right) that overlaid a 3D wireframe or model using OpenCV and OpenGL. To improve reliability in different lighting conditions, if it couldn't detect a marker then it would retry with one of 6 different sets of parameters or algorithms, at 3 different scales, thereby detecting the marker in most cases. To improve speed, it assumed that the detection parameters & scale, and the approximate position & size of the marker, wouldn't change much between frames.
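The fallback strategy described above can be sketched as follows (hypothetical code; `detect_marker` stands in for the real OpenCV-based detector, and the caller passes back the settings that worked last frame):

```python
# Hypothetical sketch of the retry strategy: try the parameter set & scale
# that worked on the previous frame first, then fall back to every
# combination of parameter sets and scales until the marker is found.

PARAM_SETS = range(6)        # e.g. different thresholds / algorithms
SCALES = (1.0, 0.5, 0.25)    # full, half and quarter resolution

def find_marker(frame, detect_marker, last=None):
    """Return (marker, params, scale), trying the previous settings first."""
    candidates = [last] if last else []
    candidates += [(p, s) for p in PARAM_SETS for s in SCALES]
    for params, scale in candidates:
        marker = detect_marker(frame, params, scale)
        if marker is not None:
            return marker, params, scale
    return None, None, None
```

The caller keeps the returned `(params, scale)` pair and passes it as `last` on the next frame, so the common case costs a single detection attempt.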

I later ported it to an iPhone 3GS (shown below), with the goal of creating the fastest Augmented Reality app on the market. To achieve this, I used ARM NEON SIMD Assembly optimizations, as well as an efficient camera pipeline by processing YUV420 images directly and performing operations like resize & rotate & color conversion as part of my NEON optimized pipeline.
AR on iPhone 3GS
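One reason processing YUV420 frames directly is so much cheaper than converting every frame to RGB: the full-resolution luminance plane comes first in the buffer, so grayscale vision code can read it with no color conversion at all. A small illustrative sketch, assuming a planar I420-style layout:

```python
# Illustrative sketch: in a planar YUV420 (I420-style) buffer the full-size
# luminance (Y) plane comes first, followed by quarter-size U and V planes,
# so grayscale computer vision can read the Y plane with no conversion.

def y_plane(yuv420, width, height):
    """Return the grayscale (Y) plane of a planar YUV420 frame as rows."""
    y = yuv420[:width * height]      # U and V planes follow, 1/4 size each
    return [y[row * width:(row + 1) * width] for row in range(height)]

def frame_size(width, height):
    """Total bytes in a YUV420 frame: Y plane + quarter-size U and V planes."""
    return width * height * 3 // 2
```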

 

A Robot that Mimics a 2yr old Child by Adapting to its Surroundings

aibo robot

Duration: 5 months part-time (Feb 2006 - Jul 2006)
Details: [Published research paper]

I worked with a Professor of Psychology to program a Sony AIBO robot dog that learns to interact with its environment, rather than being pre-programmed with how to behave. We used a Reinforcement-Learning Neural Network to allow the robot to learn whether something makes noises, vibrates, can be moved, or is too heavy. The neural network "brain" ran on a PC using Java, and wirelessly communicated with the low-level code inside the robot.
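To give a flavour of the learning idea (illustrative only; the actual system used a Reinforcement-Learning Neural Network rather than a lookup table), here is a minimal tabular Q-learning sketch, where the "robot" learns action values from reward signals such as whether an object made a noise or moved:

```python
import random

# Illustrative tabular Q-learning sketch (the real system used a neural
# network): the robot learns a value for each (object, action) pair from
# reward signals, exploring randomly some of the time (epsilon-greedy).

def q_learning(env_step, states, actions, episodes=500,
               alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = rng.choice(states)
        if rng.random() < epsilon:                      # explore
            a = rng.choice(actions)
        else:                                           # exploit best known
            a = max(actions, key=lambda act: q[(s, act)])
        reward, s2 = env_step(s, a)
        best_next = max(q[(s2, a2)] for a2 in actions)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
    return q
```

For example, if pushing a ball is the only action that produces a reward, the learned value of ("ball", "push") ends up dominating the table.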

 
jaycarRobot

Obstacle-Avoiding Talking Robot

Duration: 2 months full-time (Dec 2003 - Feb 2004)
Details: [More photos]

I created a self-contained robot that can drive around a room while avoiding obstacles, built from Infrared sensors, an Ultrasonic sonar, a Human speech module and an Atmel microcontroller.