[0: Requirements]

Before performing the steps presented here, please install the robotreality software as described on the page "Download, Building and Installation".

This page will guide you through the process of setting up and running the robotreality motion capture software.
Please note that you do not need to have the hardware up and running: this tutorial uses a pre-recorded motion capture dataset that allows offline configuration.

[1: Preparations]

Before you continue with this tutorial, please download the motion capture dataset example from our server:

/tmp$ wget http://opensource.cit-ec.de/attachments/download/155/dataset_my_name_is_flobi_2013.04.25.tar.gz
/tmp$ tar xzf dataset_my_name_is_flobi_2013.04.25.tar.gz
/tmp$ cd dataset_my_name_is_flobi_2013.04.25
/tmp/dataset_my_name_is_flobi_2013.04.25$ ls
head_data.imu  images  mocap_configured.mcfg  mocap_unconfigured.mcfg

The directory should now contain the following folder and files:
  • images = images captured during motion capture
  • head_data.imu = pan/tilt/roll angles from the imu sensor
  • mocap_configured.mcfg = a robotreality configuration file
  • mocap_unconfigured.mcfg = another config, this time without fine tuning

Advanced users can also create a dataset of their own. Please refer to the Offline Data Collection chapter.

[2: Test your Robotreality Mocap setup]

In order to test your setup, we will now call robotreality with the fully configured config file:

/tmp/dataset_my_name_is_flobi_2013.04.25$ robotreality_mocap mocap_configured.mcfg

If everything was set up correctly, you should see output similar to the following:

robotreality_mocap mocap_configured.mcfg 

| robotreality live mocap tracking            |

NOTE: intended to be used with an USB ptGrey FireflyMV!

> loading config from 'mocap_configured.mcfg'
> reading imu dump file './head_data.imu'
> updating lookuptable: processing [4] samples
> ######################### 100% done
> stats: DONTCARE[10977363] RED[5799853] GREEN[  0] BLUE[  0]
 -> finished.
> blobfinder: (re)allocating storage for 640 x 480 image
> blobfinder: (re)allocating storage for 640 x 480 image
> init DataOutputCSV()
> saving CSV data to '/tmp/robotreality_dump.csv'
init done 
opengl support available 
> setting mouse handler
> starting tracking
> waiting for frame grabbing to begin
> blobfinder: (re)allocating storage for 640 x 260 image
> blobfinder: (re)allocating storage for 640 x 259 image

Click on the window and press the 'x' key in order to exit the application.

[3: Robotreality Mocap Keyboard shortcuts]

Robotreality is configured using a GUI dialog. However, there are some keyboard shortcuts:

'x' = exit application
's' = save config to /tmp/cfg
'e' = clear color filter 
'0' = select colorclass zero (=background)
'1' = select colorclass marker 

[4: Marker filter setup]

The filtering and marker detection are based on a distance measure in the normalized RG colorspace.
The setup is quite simple and very robust to lighting changes.
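The underlying idea can be sketched as follows. This is a minimal illustration, not the actual robotreality implementation; the class centers and the distance threshold are assumed values:

```python
import math

def to_normalized_rg(pixel):
    """Convert an (R, G, B) pixel to normalized rg chromaticity.

    Dividing each channel by the overall intensity removes brightness,
    which is why this representation is robust to lighting changes."""
    r, g, b = (float(c) for c in pixel)
    s = r + g + b
    if s == 0:
        return (0.0, 0.0)
    return (r / s, g / s)

def classify(pixel, class_centers, max_dist=0.05):
    """Assign the pixel to the nearest color-class center in rg space,
    or to the background class 0 if nothing is close enough.
    max_dist is an assumed threshold, not robotreality's actual value."""
    rg = to_normalized_rg(pixel)
    best_class, best_dist = 0, max_dist
    for class_id, (cr, cg) in class_centers.items():
        d = math.hypot(rg[0] - cr, rg[1] - cg)
        if d < best_dist:
            best_class, best_dist = class_id, d
    return best_class
```

For example, with a single (hypothetical) green marker class centered at rg = (0.13, 0.67), a bright green pixel such as (40, 200, 60) lands in the marker class, while a grey pixel falls back to the background class.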

Start robotreality with the unconfigured config file:

/tmp/dataset_my_name_is_flobi_2013.04.25$ robotreality_mocap mocap_unconfigured.mcfg

A GUI will open:

Now we will configure the pixel filter. First click into the image and press 'e' in order to erase all filter settings.
Then press '0' on the keyboard to select the "don't care" colorclass. Click on 3-4 image regions which are NOT
the marker (e.g. at the positions indicated by the orange circles in the screenshot above). This process tells the filter which colors we are not interested in. Wait 1-2 s between clicks, or watch the console output to see when the processing is finished.
Now press '1'. This selects the marker colorclass. Click on 1-2 green markers.

You can now view the results of the marker filter settings:
Click on the Properties Window button [1], which opens the following dialog:

Now activate the "show normalized rgb" checkbox. This changes the face view to something similar to the following two images:

Your result should look like the image on the left. If something went wrong and your image looks more like the right one, you will either have to add more color samples to the don't care class (press '0' and click in the misclassified red regions) or start all over again.

[5: Head Zero Pose]

In order to set up the "zero" gaze position you have to find a frame where the person is looking straight ahead and all head axes are at zero degrees. Additionally, the mouth should be in a resting zero position. During normal use you tell the person wearing the helmet to look straight ahead and press the "calc head zero" button [12]. For the pre-recorded dataset we wind to frame 30 using the frame slider [3] and press this button [12].

Clicking this button will initialize and configure a variety of settings:
  • head zero pose [2][3][4]
  • mouth zero position [8]
  • image rotation compensation based on the nose marker [7]
  • eye zero rotation

You will now see the model and results fitted to the face:
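Conceptually, the zero-pose calibration works by storing a neutral reference and reporting all later samples relative to it. The sketch below illustrates this; the class and method names are assumptions for illustration, not the robotreality API:

```python
class ZeroPose:
    """Illustrative sketch of zero-pose calibration: remember a neutral
    head pose and express every subsequent frame relative to it."""

    def __init__(self):
        self.zero = None

    def calibrate(self, pan, tilt, roll):
        # Called once while the subject looks straight ahead
        # (the "calc head zero" step): store the neutral angles.
        self.zero = (pan, tilt, roll)

    def relative(self, pan, tilt, roll):
        # Subsequent frames are reported as offsets from the zero pose,
        # so a perfectly still head yields (0, 0, 0).
        zp, zt, zr = self.zero
        return (pan - zp, tilt - zt, roll - zr)
```

For instance, if the neutral pose is calibrated at (2.0, -1.5, 0.3) degrees, a later sample of (12.0, -1.5, 0.3) is reported as a 10-degree pan relative to zero.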

[6: Eye Tracking]

The next step configures the eye tracker. Please note that the current eye-tracking algorithm is based on simple thresholding; it is therefore sensitive to lighting and does not work for all iris colors.

In order to set up the eye tracker, click on the "show eyefinder results" checkbox in the Properties Window. This brings up two views of the tracking results:

Now adjust the threshold slider [5] so that the red marker area corresponds to the iris. Use the frame slider [3] to check the settings over several frames. It is not important that the whole iris area is marked red; a result as shown above is sufficient. However, it should never exceed the iris too much. You can now close the eyetracker results by clicking "show eyefinder results" again.
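The thresholding idea behind the eyefinder can be sketched as follows. This is a hypothetical illustration of simple threshold-based iris detection; the real algorithm's details may differ:

```python
def iris_mask(gray_eye_image, threshold):
    """Mark every pixel darker than the slider threshold as iris
    (shown red in the GUI). Simple thresholding like this is
    sensitive to lighting and fails for very light iris colors."""
    return [[1 if px < threshold else 0 for px in row]
            for row in gray_eye_image]

def marked_area(mask):
    # Area of the red-marked region; the threshold is tuned so this
    # roughly matches, but never much exceeds, the iris.
    return sum(sum(row) for row in mask)
```

On a tiny grayscale patch where three pixels are dark (the pupil/iris) and the rest are bright sclera, a threshold of 100 marks exactly those three pixels.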

In the next step we will adjust the iris diameter: move the "eye radius x" slider [7] so that the cyan circle matches the iris size, then set "eye radius y" [8] to the same value. See the image in [5: Head Zero Pose] for a good setting.

If you are curious, you can now check "live" in order to activate live playback. You can stop the playback by clicking the button again; rewinding and seeking are done with the frame slider [3].

[7: Mouth Tracking]

In this step we will set up the mouth tracking. Tracking of the human mouth is based on 6 marker positions. During initialization of the head zero pose we store the neutral mouth position; in subsequent frames this zero position is used to track the movement of the lips. The values printed in white next to the mouth marker frame are the output positions in mm, measured from the nose.
The two sliders "mouth scale" [13] and "mouth offset" [14] can be used to attenuate or amplify the values and to shift the position by adding an offset. For playback on our robot these values depend on the person being tracked. The sliders are adjusted so that the full span of human motion maps to the limited movement range of the robot.
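One plausible reading of how scale and offset map the tracked deflection to the robot's range is sketched below; the parameter names and the robot limits are assumptions for illustration, not robotreality's actual values:

```python
def map_mouth_position(raw_mm, zero_mm, scale, offset,
                       robot_min=-10.0, robot_max=10.0):
    """Map a tracked mouth-marker position (mm from the nose) to a
    robot command: subtract the stored zero pose, scale the deflection
    ("mouth scale"), shift it ("mouth offset"), and clamp to the
    robot's limited movement range (limits here are assumed)."""
    value = (raw_mm - zero_mm) * scale + offset
    return max(robot_min, min(robot_max, value))
```

With a stored zero of 30 mm and a scale of 1.5, a marker at 34 mm maps to a 6 mm command, while an extreme 50 mm deflection saturates at the assumed 10 mm robot limit.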

[8: Performance & Output]

In order to reach the maximum framerate during live capture, the GUI update has to be disabled by unchecking "show results". This is mandatory for stable, high framerates (60 Hz)!

Data can be output to a CSV file or sent to the real robot (the latter is only available if compiled with Flobi support).
In order to write a CSV file with all the tracking results:
  1. uncheck the "live" checkbox
  2. rewind to Frame 0
  3. check the "data output enabled" checkbox
  4. check the "live" checkbox
  5. (optional) uncheck "show results" (=faster processing)

The data is written to /tmp/robotreality_dump.csv in CSV format. All values represent angles in degrees or, for the mouth, deflection in mm. When compiled with Flobi support you can optionally deactivate the transmission of single joints.
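If you want to post-process the dump, it can be read with standard CSV tooling. The column names are not documented here, so treat any names you use as assumptions and check the header of your own dump file:

```python
import csv

def read_mocap_csv(path):
    """Read a robotreality CSV dump into a list of per-frame dicts,
    keyed by the column names found in the file's header row.
    (Which columns exist depends on your build and configuration.)"""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
```

Each row then gives you the tracked values for one frame, e.g. `read_mocap_csv("/tmp/robotreality_dump.csv")[0]` for the first frame.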

[9: Additional Settings]

For a basic setup you should not need the following settings.

  • re-fit model [1]: This button re-initialises the face tracking model. If tracking is lost during a live motion capture and playback session, click this button to re-initialise the marker model. This can recover from swapped markers etc. It should usually not be necessary, as the model is quite stable now.
  • set eye-zero [2]: Resets the gaze direction to zero. The subject should look straight ahead when you click this button.
  • reset framecounter [4]: Resets the frame counter to 0 (same effect as moving the frame slider [3] fully to the left).

[10: Debug View Settings]

These settings only affect the visualisation; they make it more convenient for the user to look at the data.

  • rotate view: This activates the rotation correction for the image display
  • normalize view histogram: Do histogram normalization for the input image (only for the userview)
  • show normalized rgb: show the image converted to the normalized RG colorspace
  • show pixelfilter results: in normalized rgb mode, the filter results are overlaid on the image
  • show eyefinder results: show the result of the eyefinder
  • show model results: overlay results of the marker model
  • show blobinfo text: overlay the name of the marker
  • visualize output: overlay output data on image (graphically and text)