
Flying around the Globe with Cesium and Your Head

This is a guest post by Xavier Bourry, co-founder and CTO of Jeeliz. Jeeliz is a web-based computer vision solution that performs client-side, real-time analysis of the webcam video feed. This post is about using Jeeliz to make Cesium navigable with head movements. - Sarah

Principle of the experiment

This web experiment allows users to navigate around the Earth using only their heads. The movements are very intuitive:

  • If the user moves forward or backward, the view zooms in or out,
  • If the user turns to look in a direction, the view follows.

We do not take into account the translation movements of the head along the horizontal and vertical axes, because these movements are less accurate and harder to control than head rotations around these axes.

Here is a screen capture video of the experiment. I go to Paris, to Île Saint-Louis and the Eiffel Tower, then to New York:

You can try the live demo here.

Choice of technologies

For this project, we display the Earth with CesiumJS, and we detect and track the head with a webcam, because almost everyone has one. We use the Jeeliz FaceFilter API, which requests the webcam through the getUserMedia API, then processes the video stream with a deep learning neural network running on the GPU through WebGL. This API is free, released by Jeeliz under the Apache 2.0 license, and you can browse its GitHub repository.
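If you have never requested a webcam in the browser, here is a rough, illustrative sketch of what such a request looks like with the standard getUserMedia API. This is not the FaceFilter code itself (the library handles this step for you); it only shows the browser API it relies on:

navigator.mediaDevices.getUserMedia({ video: true })
  .then(function(stream){
    // the library attaches a stream like this one to a video element,
    // then runs its neural network on the frames through WebGL
    console.log('webcam stream acquired:', stream);
  })
  .catch(function(err){
    // this is also what fails when the page is not served over HTTPS
    console.log('could not access the webcam:', err);
  });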

Dive into the code

Getting started

First, clone the Jeeliz FaceFilter GitHub repository.

We start with the Cesium Hello World example. Copy this code into an HTML file and serve it locally through an HTTPS server. Just change the <base href> value so that it matches the root path of the cloned GitHub repository:

<!DOCTYPE html>
<html>
<head>
  <!-- PUT THE BASE OF THE CLONED JEELIZ FACE FILTER GITHUB AS BASE HREF: -->
  <base href="faceFilter/">
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, minimum-scale=1, user-scalable=no">
  <script src="libs/cesium/Cesium.js"></script>
  <style>
      @import url(libs/cesium/Widgets/widgets.css);
      html, body, #cesiumContainer {
          width: 100%; height: 100%; margin: 0; padding: 0; overflow: hidden;
      }
  </style>
</head>
<body>
  <div id="cesiumContainer"></div>

  <script>
    //initialize Cesium
    var CESIUMVIEWER = new Cesium.Viewer('cesiumContainer');
  </script>
</body>
</html>

It is very important to serve the page through a secure HTTPS server, even locally; otherwise most web browsers will not allow access to the webcam. If you do not have a local HTTPS server yet, you can try the Python Simple HTTPS server.
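If you prefer to stay in JavaScript, here is a minimal sketch of a local HTTPS static server with Node.js. It assumes you have already generated a self-signed certificate (key.pem and cert.pem, e.g. with openssl), and your browser will ask you to accept that certificate the first time; the port and file handling are only illustrative:

// serve.js - run with `node serve.js`, then open https://localhost:8443/
var https = require('https');
var fs = require('fs');
var path = require('path');

var options = {
  key: fs.readFileSync('key.pem'),   // self-signed key (assumed to exist)
  cert: fs.readFileSync('cert.pem')  // self-signed certificate (assumed to exist)
};

https.createServer(options, function(req, res){
  // map the requested URL to a file in the current directory
  var filePath = path.join(__dirname, req.url === '/' ? 'index.html' : req.url);
  fs.readFile(filePath, function(err, data){
    if (err){
      res.writeHead(404);
      res.end('Not found');
    } else {
      res.writeHead(200);
      res.end(data);
    }
  });
}).listen(8443);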

Importing HeadControls

The Jeeliz FaceFilter API comes with a helper built for controlling navigation with the head, called HeadControls. We first import the Jeeliz FaceFilter API and HeadControls in the <head> part of the HTML code:

<script src="dist/jeelizFaceFilter.js"></script>
 <script src="demos/shared/HeadControls.js"></script>

Building the user interface

In the <body> part of the HTML code, after the CesiumJS div container, we add:

  • a button to start the head-controlled navigation,
  • a canvas where the Jeeliz FaceFilter API will be initialized. It will display the user’s mirrored view, with the head detection frame. This view is important because it lets the user immediately understand why the tracking does not work well, e.g., if the lighting is very poor, if a piece of tape is masking the webcam, or if the user is wearing a fancy mask.

Concretely, we add this code:

<button id='startHeadControlsButton'>START HEAD CONTROLLED NAVIGATION !</button>
<canvas id='headControlsCanvas' width='512' height='512'></canvas>

CSS Styling

We add the following rules to the <style> block in the <head>:

/* Since we want the Cesium Viewer to fill the page, we set the container size to 100% */
html, body, #cesiumContainer {
    width: 100%; height: 100%; margin: 0; padding: 0; overflow: hidden;
}

/* the head control button is placed at the top, horizontally centered over the Cesium Viewer */
#startHeadControlsButton {
    position: fixed;
    width: 256px;
    top: 1em;
    left: 50%;
    margin-left: -128px;
    cursor: pointer;
}

/* the head control canvas is placed at the bottom right, over the Cesium map */
#headControlsCanvas {
    position: fixed;
    right: 0px;
    bottom: 30px;
    z-index: 1000;
    transform-origin: bottom right;
    max-height: 25%;
    max-width: 25%;
}

Let’s go chase heads!

We get the start button from the DOM and, when the user clicks on it, we initialize HeadControls. This code goes in the inline <script> part, just after the CESIUMVIEWER initialization:

var DOMbutton = document.getElementById('startHeadControlsButton');
DOMbutton.onclick = function(){
  HeadControls.init({
    canvasId: 'headControlsCanvas',
    callbackMove: callbackMove, //will be explained later...
    callbackReady: function(errCode){
      if (errCode){
        console.log('ERROR : HEAD CONTROLS NOT READY. errCode =', errCode);
      } else {
        console.log('INFO : HEAD CONTROLS ARE READY :)');
        HeadControls.toggle(true);
      }
    },
    NNCpath: '../../../dist/' // parent folder of NNC.json (the neural network JSON file)
  }); //end HeadControls.init params
}; //end DOMbutton.onclick
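To stop the head-controlled navigation later (for instance from a second button), you can presumably toggle it off the same way it was toggled on. The stopHeadControlsButton element and the toggle(false) call below are assumptions mirroring the toggle(true) call in callbackReady, not something this demo actually wires up:

// hypothetical stop button: pauses head tracking without reinitializing anything
var DOMstopButton = document.getElementById('stopHeadControlsButton');
DOMstopButton.onclick = function(){
  HeadControls.toggle(false); // assumed inverse of HeadControls.toggle(true)
};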

Binding head movement with CesiumJS camera movement

HeadControls initializes the Jeeliz FaceFilter API and launches callbackReady when it is ready or when an error happens. The callback function callbackMove is called each time HeadControls detects a head movement, with one argument: an object having the following properties:

  • <float> dZ: elementary movement along the depth axis (whether we should zoom in or out),
  • <float> dRx: elementary rotation of the head around the horizontal axis (look up/down),
  • <float> dRy: elementary rotation of the head around the vertical axis (look left/right).

We declare the callbackMove function. It binds the elementary movements provided by HeadControls to the CesiumJS camera movements:

// we group the settings; it is cleaner :)
var SETTINGS = {
  zoomSensibility: 5.5,
  panSensibility: 0.00000015
};

function callbackMove(mv){
  // camera height above the ellipsoid, in kilometers
  var cameraHeight = CESIUMVIEWER.scene.globe.ellipsoid.cartesianToCartographic(CESIUMVIEWER.camera.position).height / 1000.0 || Number.MAX_VALUE;

  if (mv.dZ !== 0) { // the head moves forward/backward -> zoom in/out
    var zoomAmount = mv.dZ * SETTINGS.zoomSensibility * cameraHeight;
    CESIUMVIEWER.camera.moveForward(zoomAmount);
  }
  if (mv.dRx !== 0) { // the head turns up/down -> tilt the view
    var panAmountX = SETTINGS.panSensibility * mv.dRx * cameraHeight;
    CESIUMVIEWER.scene.camera.rotateUp(panAmountX);
  }
  if (mv.dRy !== 0) { // the head turns left/right -> rotate around the Earth's axis
    var panAmountY = SETTINGS.panSensibility * mv.dRy * cameraHeight;
    CESIUMVIEWER.scene.camera.rotate(Cesium.Cartesian3.UNIT_Z, panAmountY);
  }
}
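As a possible refinement (not part of the original demo), you could ignore forward movements once the camera gets too close to the ground, so that leaning forward never drives the view into the terrain. The minimum height and the sign convention for dZ below are assumptions; pass callbackMoveClamped instead of callbackMove to HeadControls.init if you want this behavior:

var MIN_CAMERA_HEIGHT_KM = 0.2; // assumed minimum altitude, in kilometers

function callbackMoveClamped(mv){
  var cameraHeight = CESIUMVIEWER.scene.globe.ellipsoid.cartesianToCartographic(CESIUMVIEWER.camera.position).height / 1000.0 || Number.MAX_VALUE;
  if (mv.dZ > 0 && cameraHeight < MIN_CAMERA_HEIGHT_KM){
    mv.dZ = 0; // assumed: positive dZ means "move forward", so discard it near the ground
  }
  callbackMove(mv); // delegate everything else to the original handler
}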

Conclusion

That’s it! This is a tiny, simplified version of the live demo. We are going to add mouth-opening detection to the Jeeliz FaceFilter API soon; it will be a nice opportunity for new fun experiments! Follow us on Twitter @StartupJeeliz to stay informed.