Cesium ion

Getting started

This step-by-step guide will help you configure and deploy Cesium ion on Kubernetes. While we expect you to already be experienced with running Kubernetes workloads, this guide is written to take you from zero to Cesium ion with minimal prior knowledge. If you’re updating an existing installation, see Upgrading from a previous release.

What’s included

  • startHere.html - Documentation (this file)

  • cesium-ion/ - Helm chart

  • images/cesium-ion.tar - Container image for the Cesium ion API

  • images/cesium-ion-asset-server.tar - Container image for the Cesium ion asset server

  • images/cesium-ion-frontend.tar - Container image for the Cesium ion user interface

  • images/cesium-ion-tiling.tar - Container image for the Cesium ion 3D tiling pipeline

  • images/cesium-ion-job-watcher.tar - Container image for the job watcher to detect abnormally terminated jobs

  • images/postgresql.tar - Container image for PostgreSQL

  • scripts/ - Scripts that allow you to run the tiling pipeline from the command line.

  • restApi.html - REST API reference

  • sampleData/ - Sample data used throughout the documentation

  • thirdParty/ - Additional files used by the documentation

System requirements

Any x86-64 compatible system capable of running Kubernetes can also run Cesium ion. Requirements for production workloads depend directly on your use case, and Cesium ion performance will scale with additional CPU, RAM, and faster storage. For local development, we recommend a computer with the following minimum specifications:

  • An x86-64 compatible processor

  • 8 or more CPU cores (see Configuring Resources)

  • 32GB of RAM or greater

  • At least 32GB of volume storage

Additionally, to follow this guide you will need sudo, admin, or similarly elevated permissions as well as a tool for working with container registries, such as Docker or podman.

Kubernetes has a diverse set of configuration options. The first time you install Cesium ion we recommend following this guide closely to avoid introducing uncertainty in the setup process. Once you are comfortable with the Cesium ion Helm chart and configuration, you can further customize it to your specific needs.

Installing microk8s

This guide uses microk8s, a lightweight and easy to configure Kubernetes implementation meant for local development. If you have an existing Kubernetes cluster you would like to use, you will need to update the supplied commands to those available with your Kubernetes installation. Skip to the Importing Images section if you are not installing microk8s.

  • Linux

  • Windows

Run the below command to install microk8s using snap.

sudo snap install microk8s --classic

Update permissions and configuration

Once installed, run the below command to add yourself to the microk8s user group:

sudo usermod -a -G microk8s $USER

You also need to create a .kube directory in your home folder to store microk8s configuration. Run the following commands:

mkdir ~/.kube
sudo chown -f -R $USER ~/.kube
sudo microk8s config > ~/.kube/config

Since we made changes to your group settings, you must log out and log back in before continuing.

We recommend using WSL2 to set up a microk8s environment. Use the official WSL2 microk8s installation instructions: https://microk8s.io/docs/install-wsl2

Update permissions and configuration

Once installed, run the below command to add yourself to the microk8s user group:

sudo usermod -a -G microk8s $USER

You also need to create a .kube directory in your home folder to store microk8s configuration. Run the following commands:

mkdir ~/.kube
sudo chown -f -R $USER ~/.kube
sudo microk8s config > ~/.kube/config

Since we made changes to your group settings, you must close out of WSL2 and run it again for the changes to take place.

When working with WSL2, you can easily access files on your root drive but you must use Unix style paths. For example, if your root drive is C:\, you can access files at /mnt/c

Verify installation

Verify the installation by running the following command:

microk8s status --wait-ready

This should produce output similar to the following. The important part is "microk8s is running". If you receive an error, review the Installing microk8s section.

microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none
addons:
  enabled:
    dns                  # (core) CoreDNS
    ha-cluster           # (core) Configure high availability on the current node
    helm                 # (core) Helm - the package manager for Kubernetes
    helm3                # (core) Helm 3 - the package manager for Kubernetes
    ingress              # (core) Ingress controller for external access
    metrics-server       # (core) K8s Metrics Server for API access to service metrics
    registry             # (core) Private image registry exposed on localhost:32000
  disabled:
    cert-manager         # (core) Cloud native certificate management
    community            # (core) The community addons repository
    dashboard            # (core) The Kubernetes dashboard
    gpu                  # (core) Automatic enablement of Nvidia CUDA
    host-access          # (core) Allow Pods connecting to Host services smoothly
    hostpath-storage     # (core) Storage class; allocates storage from host directory
    kube-ovn             # (core) An advanced network fabric for Kubernetes
    mayastor             # (core) OpenEBS MayaStor
    metallb              # (core) Load balancer for your Kubernetes cluster
    minio                # (core) MinIO object storage
    observability        # (core) A lightweight observability stack for logs, traces and metrics
    prometheus           # (core) Prometheus operator for monitoring and logging
    rbac                 # (core) Role-Based Access Control for authorization
    storage              # (core) Alias to hostpath-storage add-on, deprecated

Verify required features

The default Cesium ion configuration requires dns, helm3, ingress and registry to be enabled. If they are not shown as enabled in the output from the previous command, run the following commands:

microk8s enable dns
microk8s enable helm3
microk8s enable ingress
microk8s enable registry
microk8s config > $HOME/.kube/config

Microk8s installs kubectl and helm. You can run them as microk8s kubectl and microk8s helm to administer the cluster.
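
If you find the prefixed commands cumbersome, you can optionally define shell aliases so the standard names work directly. These aliases are a convenience only and are not required; the rest of this guide spells out the microk8s prefix so the commands work either way.

# Optional aliases for the current shell session (add to ~/.bashrc to persist)
alias kubectl='microk8s kubectl'
alias helm='microk8s helm'

# Quick check that the cluster responds
kubectl get nodes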

Importing images

Run the below commands to import the images into the microk8s registry add-on. The registry is created at localhost:32000. Importing the images may take a few minutes per step. If you are using podman or another Docker alternative, be sure to update the commands for your tooling.

All commands throughout this documentation are assumed to be executed from the top-level directory where you unpacked the zip, i.e. the directory containing startHere.html.
docker image load --input images/cesium-ion-asset-server.tar
docker tag cesiumgs/cesium-ion-asset-server:1.4.0 localhost:32000/cesium-ion-asset-server:1.4.0
docker push localhost:32000/cesium-ion-asset-server:1.4.0

docker image load --input images/cesium-ion-tiling.tar
docker tag cesiumgs/cesium-ion-tiling:1.4.0 localhost:32000/cesium-ion-tiling:1.4.0
docker push localhost:32000/cesium-ion-tiling:1.4.0

docker image load --input images/cesium-ion.tar
docker tag cesiumgs/cesium-ion:1.4.0 localhost:32000/cesium-ion:1.4.0
docker push localhost:32000/cesium-ion:1.4.0

docker image load --input images/cesium-ion-frontend.tar
docker tag cesiumgs/cesium-ion-frontend:1.4.0 localhost:32000/cesium-ion-frontend:1.4.0
docker push localhost:32000/cesium-ion-frontend:1.4.0

docker image load --input images/cesium-ion-job-watcher.tar
docker tag cesiumgs/cesium-ion-job-watcher:1.4.0 localhost:32000/cesium-ion-job-watcher:1.4.0
docker push localhost:32000/cesium-ion-job-watcher:1.4.0

docker image load --input images/postgresql.tar
docker tag bitnami/postgresql:15.4.0-debian-11-r0 localhost:32000/postgresql:15.4.0-debian-11-r0
docker push localhost:32000/postgresql:15.4.0-debian-11-r0

Troubleshooting importing images

Command "docker" not found

If you are running under WSL2, you will need a Docker client installed inside the WSL2 environment, not on the host Windows machine. If needed, run the below commands inside WSL2 to install and configure Docker.

sudo apt update
sudo apt install docker.io
sudo usermod -a -G docker $USER
exec su -l $USER

Insecure registries

On some versions of Docker, you may receive an error regarding insecure registries. If this happens, configure the Docker daemon to allow the local registry by creating or editing /etc/docker/daemon.json to include the following setting:

{
  "insecure-registries" : ["localhost:32000"]
}

You also need to restart Docker using the below command for the changes to take effect:

sudo systemctl restart docker

License configuration

Cesium ion requires a license, which is configured at the top of cesium-ion/values.yaml. By default it will be an empty string:

license: ""

Install your license by performing the following steps:

  1. Download your license file from https://cesium.com/downloads

  2. Open the license file in a text editor and copy the entire contents into the clipboard

  3. Open cesium-ion/values.yaml in a text editor.

  4. Paste the contents into the license string at the top of the file (see the example below)
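
For illustration, after pasting, the top of cesium-ion/values.yaml should look similar to the following. The placeholder below stands in for the actual contents of your downloaded license file:

license: "[FULL CONTENTS OF YOUR LICENSE FILE]"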

Volume configuration

The default Cesium ion configuration stores all stateful data across five volumes:

  • cesium-ion-sources - User uploaded source data

  • cesium-ion-assets - Processed source data served as Cesium ion assets

  • cesium-ion-stories - Images and other media uploaded as part of Cesium Stories

  • cesium-ion-postgresql - PostgreSQL database containing accounts, Cesium Stories, and asset metadata.

  • cesium-ion-archives - Processed full asset archives and processed clip and ship output

While Kubernetes has a myriad of storage options, we will use local persistent volumes for ease of setup on a single machine. Follow the below steps to configure them:

When editing cesium-ion/values.yaml on Windows, file paths should be written as seen from inside the WSL2 shell. For example, paths on your C:\ drive need to start with /mnt/c/. Additionally, the postgresql volume must live inside the WSL2 VM, for example at /home/$USER/postgres, and cannot reside under /mnt/c.
  1. Run microk8s kubectl get nodes and copy the name of the node. A node in Kubernetes refers to the machine where pods run. In this case, your machine will be the only node.

  2. In cesium-ion/values.yaml find the localPersistentVolumes section

  3. For node, replace # REQUIRED: Name returned by "kubectl get nodes" with the node name from step 1.

  4. Create a directory for your assets. A single command that creates all of the directories used in these steps is shown after this list.

  5. Under assets, replace # REQUIRED: Path to output data on your filesystem. with the absolute path to the directory you created in step 4.

  6. Create a directory for your source data. This must be different from previous directories.

  7. Under sources, replace # REQUIRED: Path to input data on your filesystem. with the absolute path to the directory you created in step 6.

  8. Create a directory for your stories images. This must be different from previous directories.

  9. Under stories, replace # REQUIRED: Path to stories images on your filesystem. with the absolute path to the directory you created in step 8.

  10. Create a directory for your archives. This must be different from previous directories.

  11. Under archives, replace # REQUIRED: Path to archives data on your filesystem. with the absolute path to the directory you created in step 10.

  12. Create a directory for the PostgreSQL database. This must be different from previous directories.

  13. Under postgresql, replace # REQUIRED: Path to postgres data on your filesystem. with the absolute path to the directory you created in step 12.
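
As a convenience, the directories from steps 4, 6, 8, 10, and 12 can be created in a single command. The paths below are placeholders; substitute whatever absolute paths suit your environment (on Windows, keep the postgres directory inside the WSL2 VM):

# Create one directory per volume (example paths only)
mkdir -p ~/cesium-ion/assets ~/cesium-ion/sources ~/cesium-ion/stories ~/cesium-ion/archives ~/cesium-ion/postgres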

When you are done, the localPersistentVolumes section should contain all the information you need for your install.

localPersistentVolumes:
  enabled: true
  node: [RESULT OF "get nodes" FROM STEP 1]
  assets:
    enabled: true
    path: [DIRECTORY FROM STEP 4]
    capacity: 32Gi
  sources:
    enabled: true
    path: [DIRECTORY FROM STEP 6]
    capacity: 32Gi
  stories:
    enabled: true
    path: [DIRECTORY FROM STEP 8]
    capacity: 32Gi
  archives:
    enabled: true
    path: [DIRECTORY FROM STEP 10]
    capacity: 32Gi
  postgresql:
    enabled: true
    path: [DIRECTORY FROM STEP 12]
    capacity: 32Gi

PostgreSQL

Except in advanced use cases, Cesium ion requires a PostgreSQL database. To make initial configuration easier, the Cesium ion Helm chart includes a subchart packaged by Bitnami with a preconfigured user and clear text password. This provides basic configuration for local development, but is not configured for production use.

Once you are comfortable with configuring and installing Cesium ion, you have two options:

  1. Connect Cesium ion to your own, externally managed database. See External PostgreSQL configuration in the "Additional Configuration" section.

  2. Configure the bundled subchart for production use by referring to its official page on ArtifactHub.

We will continue with the default configuration for this tutorial, but remember that the default configuration is meant for getting up and running with minimal effort. Properly configuring a PostgreSQL database for production use is your responsibility and outside the scope of this document.

Configuring Resources

By default, Cesium ion requires 6.5 CPU cores to run the servers and a job-monitoring pod. Additionally, if you are tiling your own data, each tiling job will use 2 cores.

If you want to run a development setup with fewer CPU cores, update the resources subsections under assetServer, apiServer, and frontendServer so that they collectively request fewer than your desired number of cores.
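
As a rough illustration, a reduced development configuration could look similar to the sketch below. This assumes the chart follows the usual Kubernetes requests/limits layout; adjust the CPU values in the existing resources subsections of cesium-ion/values.yaml rather than copying this block verbatim, since the exact fields in your chart version may differ:

# Illustrative only - edit the existing resources subsections in values.yaml
apiServer:
  resources:
    requests:
      cpu: 1
assetServer:
  resources:
    requests:
      cpu: 1
frontendServer:
  resources:
    requests:
      cpu: 500m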

For a production setup, it is recommended to increase the tilingJob resources to 4 CPU cores. If available, the assetServer and apiServer resources would also benefit from being set to 4 CPU cores.

Running the install command

Your cesium-ion/values.yaml file should now have a complete and valid configuration. Install the chart into a new cesium-ion namespace by running the following command:

microk8s helm install cesium-ion cesium-ion/ --wait --values cesium-ion/values.yaml --namespace cesium-ion --create-namespace

This process takes about a minute. There is no requirement to use a specific namespace; we are simply following best practices. Once installation is complete, you should see output similar to the below.

NAME: cesium-ion
LAST DEPLOYED: Sun Nov 19 11:31:17 2023
NAMESPACE: cesium-ion
STATUS: deployed
REVISION: 1
NOTES:

The above indicates success. If you received an error instead, run microk8s helm uninstall cesium-ion --namespace cesium-ion to ensure any partially installed components are removed. Then review this section to ensure you didn’t miss a step and try again.

Once the output is successful, the NOTES: section will contain three commands to retrieve the URL of the application. These commands do not include the microk8s prefix, so copy and run the versions below instead:

export NODE_PORT=$(microk8s kubectl get --namespace cesium-ion -o jsonpath="{.spec.ports[0].nodePort}" services cesium-ion-frontend)
export NODE_IP=$(microk8s kubectl get nodes --namespace cesium-ion -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT

The output will be the fully qualified URL of the application:

http://10.152.183.244:8080

Visit this URL and the Cesium ion user interface should load. It will look similar to the below image:

If Cesium ion fails to load, uninstall the application and review the above steps again. If your license is expired or invalid, you will instead be redirected to a licensing page with additional information.
Figure 1. Cesium ion Self-Hosted Running with the default configuration

Verification

After loading the application, you can perform a few basic tasks to ensure everything is working correctly.

  1. Using a file manager, open the sampleData folder included in the installation zip

  2. Drag and drop House.laz into the browser window.

  3. Cesium ion should detect that you are uploading a Point Cloud. Click Upload.

  4. The asset should upload successfully and you will see an entry appear for it on the My Assets page. Progress information will appear in the preview window when the asset is selected.

  5. Once tiling completes, the asset will load. In this case it’s a small point cloud of a House.

Figure 2. Cesium ion Self-Hosted after tiling House.laz

After the initial install, additional changes to cesium-ion/values.yaml will not take effect unless you run the helm upgrade command after saving the modified file:

microk8s helm upgrade cesium-ion cesium-ion/ --wait --values cesium-ion/values.yaml --namespace cesium-ion

Next steps

Congratulations, you now have a working installation of Cesium ion running on Kubernetes. While we recommend you read through this documentation in its entirety at least once, where you go next is up to you:

  • The Additional Configuration section provides an overview of the most common and important options for Cesium ion, such as configuring Single sign-on, configuring an ingress and TLS, using an external PostgreSQL server, and installing Cesium 3D Global Content.

  • The Application architecture section provides an introduction to the overall system architecture, services, jobs, and other important information you should become familiar with.

  • The REST API reference documentation provides information on building clients that integrate directly with Cesium ion.

  • The Advanced Topics section describes how to use the tiling pipeline and asset server container images without Kubernetes, Cesium ion’s data management, user interface, or REST API. This includes instructions for running the tilers from the command line.

Additional configuration

Ingress and TLS

The default configuration for Cesium ion provides for IP-based access over HTTP/1.1. While this is acceptable for experimentation or local development, an ingress should be configured for production use to take advantage of DNS, TLS, caching, and the improved performance provided by HTTP/2 or HTTP/3. Follow the steps below to set one up:

Enable the ingress

  1. Open cesium-ion/values.yaml.

  2. Find the ingress section and set enabled to true.

  3. Directly above the ingress section, set localNoDNS to false.

  4. If you will be using an ingress other than the default, enter it in className and update annotations if required. Using a non-default ingress is outside of the scope of this documentation so consult your ingress documentation if needed.

  5. Under the assetServer, apiServer and frontendServer sections, under service:, change type: from NodePort to ClusterIP.

Configure DNS

The default Cesium ion configuration creates three user-facing services: one each for the API, the front end, and asset serving. Each of them will require a hostname to work with the ingress. While these names can be anything, it is recommended to give them a shared domain. In this example we will use:

  • ion.example - User interface

  • api.ion.example - REST API

  • assets.ion.example - Asset server

Decide on the host names you would like to use or use the above. They can be changed later.

  1. Open cesium-ion/values.yaml.

  2. For each of the frontendServer, apiServer and assetServer sections:

    • Find the endpoint subsection.

    • Set host to the desired name.

Access Cesium ion via DNS

For multi-node clusters a DNS server will need to be configured, which is outside the scope of this document. For local testing and development on a single machine, the hosts file can be updated to point to your ingress. Let’s do that now to validate the above configuration.

Run the below command to apply the ingress and host changes made in the previous section

microk8s helm upgrade cesium-ion cesium-ion/ --wait --values cesium-ion/values.yaml --namespace cesium-ion

Then run the following command to get the IP address of the ingress

microk8s kubectl get --namespace cesium-ion ingress

The output will look similar to the following. In this case the Ingress IP is 127.0.0.1.

NAME         CLASS    HOSTS                                            ADDRESS     PORTS     AGE
cesium-ion   public   assets.ion.example,api.ion.example,ion.example   127.0.0.1   80, 443   20s

Open the hosts file in a text editor. You will need elevated permissions to edit this file:

  • On Linux this file is located in /etc/hosts

  • On Windows this file is located in <Root>\Windows\System32\drivers\etc\hosts

Add an entry for each host name with the IP address returned by the above command. For example:

127.0.0.1       ion.example
127.0.0.1       api.ion.example
127.0.0.1       assets.ion.example

Updates to hosts take effect immediately after the file is saved.

Importing a TLS certificate

While not strictly required, it is strongly recommended to configure TLS to enable support for secure communication. In addition to added security, newer protocols such as HTTP/2 and HTTP/3 provide greatly improved performance but require TLS to be enabled before an ingress can take advantage of them.

While each Cesium ion server can have its own configuration, for simplicity these instructions will use a single certificate for all three servers.

Creating the TLS certificate key pair is outside the scope of this document. Refer to your own internal processes or tools. Ensure the certificate includes the DNS host names you configured in the previous section. Wildcard certificates can also be used.
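
If you only need a certificate to validate the setup locally, one option is a self-signed certificate. The following is a generic OpenSSL example (OpenSSL 1.1.1 or newer is required for the -addext flag), not a production recommendation; browsers will warn about the untrusted certificate:

# Self-signed certificate covering the three example host names (testing only)
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=ion.example" \
  -addext "subjectAltName=DNS:ion.example,DNS:api.ion.example,DNS:assets.ion.example"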

Once you have the certificate, follow the steps below:

  1. Create a file named certificate.yaml with the below content. Replace the public and private keys with the values from your certificate.

    apiVersion: v1
    kind: Secret
    metadata:
      name: cesiumion-tls-secret
    stringData:
      tls.crt: |
        -----BEGIN CERTIFICATE-----
        [INSERT YOUR PUBLIC CERTIFICATE]
        -----END CERTIFICATE-----
      tls.key: |
        -----BEGIN PRIVATE KEY-----
        [INSERT YOUR PRIVATE KEY HERE]
        -----END PRIVATE KEY-----
    type: kubernetes.io/tls
  2. Install the new secret into your Kubernetes cluster by running the below command. The new secret will be named cesiumion-tls-secret with keys tls.crt and tls.key. Remember to use the same namespace you used when installing Cesium ion.

    microk8s kubectl create --namespace cesium-ion -f certificate.yaml
  3. Open cesium-ion/values.yaml.

  4. For each of the frontendServer, apiServer and assetServer sections:

    • Find the endpoint subsection.

    • Set tls to true.

    • Set tlsSecretName to cesiumion-tls-secret

If you prefer to use multiple certificates, repeat the above process multiple times with a different certificate and secret name for each server.

Upgrade the application and verify the configuration

Once the ingress, DNS, and TLS are configured, upgrade the application by running:

microk8s helm upgrade cesium-ion cesium-ion/ --wait --values cesium-ion/values.yaml --namespace cesium-ion
  • Linux

  • Windows

Navigate to the configuration application URL, for example https://ion.example. The Cesium ion user interface should load and work the same as before. If you encounter an issue, review the above steps and try again.

In order to expose the Kubernetes ingress running in your WSL2 VM, you will need to port-forward the ingress pod to the host. To perform this action, you will need root privileges so that the network binding from kubectl works.

Run the following command to get the name of the ingress pod:

microk8s kubectl get pods -n ingress

Now expose the Kubernetes pod with port forwarding to the Windows host. Remember to replace ingress-pod-name with the name from above:

sudo microk8s kubectl -n ingress port-forward pod/ingress-pod-name --address 127.0.0.1 80:80 443:443

You should see output similar to the following:

Forwarding from 127.0.0.1:80 -> 80
Forwarding from 127.0.0.1:443 -> 443

Navigate to the configuration application URL, for example https://ion.example. The Cesium ion user interface should load and work the same as before. If you encounter an issue, review the above steps and try again.

The port-forward command does not exit until you press CTRL-C. Exiting the port-forward command means ion will no longer be accessible from your Windows desktop. If you plan on doing Windows-based development, it is recommended to run this process in the background on start-up.

Single sign-on (SSO)

The default configuration for Cesium ion does not include authentication and all users share a single account. To allow users to each have their own account, Cesium ion can integrate with your existing identity provider (IdP) to support SSO via SAML authentication. This can be accomplished by following the steps below:

  1. From within your identity provider, configure a new SAML application. This process will vary depending on your IdP. Cesium ion requires the login URL, entity ID, and SAML certificate.

  2. Cesium ion also has an Administrator user interface for configuring shared assets and application defaults. Access is granted by specifying a specific IdP attribute name and expected value that signals administrator access. For example, when configuring Google Workspace, the attribute would be "Groups" and the value would be the name of the Google Workspace group you created for ion administrators. Administrator access does not grant access to other users’ data; it is only for configuring shared assets and defaults.

  3. Create a file named samlSecret.yaml and add a secret resource with the following contents, replacing the certificate body with your SAML certificate.

    apiVersion: v1
    kind: Secret
    metadata:
      name: saml-secret
    stringData:
      saml.pem: |
        -----BEGIN CERTIFICATE-----
        [INSERT YOUR SAML CERTIFICATE]
        -----END CERTIFICATE-----
  4. Install the new secret into your Kubernetes cluster by running the below command. The new secret will be named saml-secret with a key of saml.pem. Remember to use the same namespace you used when installing Cesium ion.

    microk8s kubectl create --namespace cesium-ion -f samlSecret.yaml
  5. Open cesium-ion/values.yaml, find the authenticationMode property, and change it to saml.

  6. In the saml section, update the certificateSecret, loginRequestUrl, entityId, nameIdFormat, adminAttributeName, and adminAttributeValue fields to match those configured in step 1. The saml section should look similar to the following

      authenticationMode: saml
    
      # If authenticationMode=saml, these are required
      saml:
        # This secret must be created outside the cesium-ion chart
        # It should contain the SAML certificate for your identity provider
        certificateSecret:
          name: "saml-secret"
          key: "saml.pem"
        # The SAML URL for your identity provider
        loginRequestUrl: "https://login.for.your.provider.com/"
        # The entity ID that was configured in your identity provider when setting up the SAML application
        entityId: "your-entity-id"
        # The name ID format to use. Valid values are email, persistent or unspecified.
        nameIdFormat: "persistent"
        # Access to the Cesium ion Administrator user interface is granted to any
        # identity that matches the below name and value criteria
        # The attribute name to look up. Examples: "roles", "Groups"
        adminAttributeName: "Groups"
        # The value that is expected to be found for adminAttributeName.
        # This value is treated as a semicolon delimited list.
        # Examples: "admin", "my-admin-group", "users; admin; members"
        adminAttributeValue: "ion-administrators"
  7. Save your changes to cesium-ion/values.yaml

  8. Upgrade the application by running:

    microk8s helm upgrade cesium-ion cesium-ion/ --wait --values cesium-ion/values.yaml --namespace cesium-ion
  9. When you navigate to the Cesium ion user interface, you will be redirected to your identity provider for authentication before being granted access. If you encounter an issue, review the above steps and try again.

Data uploaded while SSO is disabled is unavailable once SSO is enabled. Similarly, any data uploaded while using SSO becomes unavailable when SSO is disabled.

Default and shared assets

Cesium ion does not contain any data by default. For example, creating a new story with Cesium Stories will show the Earth as a bright blue WGS84 ellipsoid without terrain or imagery:

Figure 3. Cesium Stories without default assets

You can configure assets, such as Cesium 3D Global Content, so they are available to all users and optionally used by Stories for defaults. If you don’t have any global data, we’ve included two sample datasets as part of the installation zip:

  • Blue Marble - A public domain 500m resolution global imagery layer from NASA

  • GTOPO30 - A public domain 30 arcsecond resolution global terrain layer from USGS

These instructions will help you load these or any other data sets into Cesium ion. Steps vary slightly based on whether you are using Single sign-on, so be sure to choose the tab that matches your configuration.

  • SSO enabled

  • SSO disabled

When using Single sign-on, only users identified as Cesium ion administrators can upload or modify default or shared assets. You can verify that you are in this group by clicking on your username in the upper right and confirming the presence of the Administration option. When using SSO, the Asset Depot is also the only way to make data available to all users.

  1. Click on your username in the upper right and select Administration.

  2. Click Add Asset

  3. Using your preferred file manager, open the sampleData/ folder included with the Cesium ion release zip

  4. Drag and Drop BlueMarble.tif onto this page

  5. Optionally change the name (this can be changed later)

  6. Enter a description; this will be visible to all users (this can be changed later)

  7. Toggle Default so that it is enabled

  8. Select Imagery under What kind of data is this?

  9. Click Upload

  10. Click Add Asset again

  11. Drag and Drop GTOPO30.tif on this page

  12. Optionally change the name (this can be changed later)

  13. Enter a description; this will be visible to all users (this can be changed later)

  14. Toggle Default so that it is enabled

  15. Select Terrain under What kind of data is this?

  16. Leave everything else as the default and click Upload

  17. You can monitor the progress of each asset from this page while they tile. The process will take about 12 minutes using the default Cesium ion tiling job allocations. Once both assets complete tiling, continue to the next step

  18. Click App Settings along the top navigation menu

  19. Click Story default imagery and select Blue Marble

  20. Click Story default terrain and select GTOPO30

  21. Click Asset viewer imagery and select Blue Marble

Every user that logs into Cesium ion will now have the same defaults for Cesium Stories and the asset preview window. Any Asset Depot assets marked as "Default" will automatically be added to the user’s My Assets page the first time they log in.

When not using Single Sign-on, data is shared among all users via the My Assets page and any user can upload and change any asset.

  1. Navigate to the My Assets page

  2. Using your preferred file manager, open the sampleData/ folder included with the Cesium ion release zip

  3. Drag and Drop BlueMarble.tif anywhere onto the Cesium ion application, which will bring you to the Add Data page

  4. Optionally change the name (this can be changed later)

  5. Select Imagery from the drop down

  6. Click Upload

  7. Drag and Drop GTOPO30.tif anywhere onto the Cesium ion application.

  8. Optionally change the name (this can be changed later)

  9. Select Terrain from the drop down

  10. Leave other options as the default and click Upload

  11. You will now be back on the My Assets page. You can click on either BlueMarble or GTOPO30 to monitor the tiling progress of each asset. GTOPO30 takes the longest, about 12 minutes, using the default Cesium ion tiling job allocations. Once both assets complete, continue to the next step

  12. Click App Settings along the top navigation menu

  13. Click Story default imagery and select BlueMarble

  14. Click Story default terrain and select GTOPO30

  15. Click Asset viewer imagery and select BlueMarble

After performing the above steps, creating a Cesium Story will now use the configured default assets:

Figure 4. Cesium Stories with the included Blue Marble imagery and GTOPO terrain

Additionally, when previewing other assets on the My Assets page, Blue Marble will be used as the default base layer.

You may have noticed we did not select anything for the Story default buildings setting. This option is meant exclusively for the Cesium OSM Buildings tileset, available for purchase separately.

Cesium 3D Global Content

In most cases, 3D Tiles tilesets created outside of Cesium ion or purchased from a third party can be imported into Cesium ion using the same upload process used for untiled source data. However, when dealing with large existing tilesets spanning hundreds of gigabytes or terabytes, such as Cesium 3D Global Content, it is not practical to upload them through the user interface.

Instead, you can place the data in a volume accessible to the Cesium ion installation and tell Cesium ion about it. This is accomplished by running the Cesium-provided importData job template. Cesium ion will not make an additional copy of the data; it will serve it from the provided location.

For this guide, we will use the default assets-volume volume mount we already created as part of the Volume Configuration section of Getting Started.

Create a subdirectory under the assets-volume location, for example imported. Then follow the instructions for the relevant data below.

Installing Cesium World Terrain

  1. Copy the Cesium World Terrain file to the imported directory you created above. For example imported/cesium_world_terrain_v1.2.terraindb. You can rename the file if desired. It will not be visible elsewhere in the system.

  2. Generate the import data template by running:

    microk8s kubectl get configmap cesium-ion-jobs --namespace cesium-ion -o=jsonpath="{.data.importData}{'\n'}" > import-cesium-world-terrain.yaml
  3. Open import-cesium-world-terrain.yaml in a text editor.

  4. Update metadata.name to be a unique string.

  5. Update spec.template.spec.command to bin/install-cesium-world-terrain

  6. Update spec.template.spec.args to the relative location of the data: imported/cesium_world_terrain_v1.2.terraindb

  7. The section should look similar to the below:

              command:
                - bin/install-cesium-world-terrain
              args:
                - imported/cesium_world_terrain_v1.2.terraindb
  8. Save import-cesium-world-terrain.yaml and run the job to import the asset

    microk8s kubectl create --namespace cesium-ion -f import-cesium-world-terrain.yaml

Installing Cesium World Bathymetry

  1. Copy the Cesium World Bathymetry file to the imported directory you created above. For example imported/cesium_world_bathymetry_v1.0.terraindb. You can rename the file if desired. It will not be visible elsewhere in the system.

  2. Generate the import data template by running:

    microk8s kubectl get configmap cesium-ion-jobs --namespace cesium-ion -o=jsonpath="{.data.importData}{'\n'}" > import-cesium-world-bathymetry.yaml
  3. Open import-cesium-world-bathymetry.yaml in a text editor.

  4. Update metadata.name to be a unique string.

  5. Update spec.template.spec.command to bin/install-cesium-world-bathymetry

  6. Update spec.template.spec.args to the relative location of the data: imported/cesium_world_bathymetry_v1.0.terraindb

  7. The section should look similar to the below:

              command:
                - bin/install-cesium-world-bathymetry
              args:
                - imported/cesium_world_bathymetry_v1.0.terraindb
  8. Save import-cesium-world-bathymetry.yaml and run the job to import the asset

    microk8s kubectl create --namespace cesium-ion -f import-cesium-world-bathymetry.yaml

Installing Sentinel-2 Imagery

  1. Copy the Sentinel-2 sqlite database to the imported directory you created above. For example imported/s2cloudless-2021_4326_v1.0.0_with_index.sqlite. You can rename the file if desired. It will not be visible elsewhere in the system.

  2. Generate the import data template by running:

    microk8s kubectl get configmap cesium-ion-jobs --namespace cesium-ion -o=jsonpath="{.data.importData}{'\n'}" > import-sentinel-2.yaml
  3. Open import-sentinel-2.yaml in a text editor.

  4. Update metadata.name to be a unique string.

  5. Update spec.template.spec.command to bin/install-sentinel-2

  6. Update spec.template.spec.args to the relative location of the data: imported/s2cloudless-2021_4326_v1.0.0_with_index.sqlite

  7. The section should look similar to the below:

              command:
                - bin/install-sentinel-2
              args:
                - imported/s2cloudless-2021_4326_v1.0.0_with_index.sqlite
  8. Save import-sentinel-2.yaml and run the job to import the asset

    microk8s kubectl create --namespace cesium-ion -f import-sentinel-2.yaml

Installing Cesium OSM Buildings

  1. Copy the Cesium OSM Buildings 3D Tiles database to the imported directory you created above. For example imported/planet-cwt-240304.3dtiles. You can rename the file if desired. It will not be visible elsewhere in the system.

  2. Generate the import data template by running:

    microk8s kubectl get configmap cesium-ion-jobs --namespace cesium-ion -o=jsonpath="{.data.importData}{'\n'}" > import-cesium-osm-buildings.yaml
  3. Open import-cesium-osm-buildings.yaml in a text editor.

  4. Update metadata.name to be a unique string.

  5. Update spec.template.spec.command to bin/install-cesium-osm-buildings

  6. Update spec.template.spec.args to the relative location of the data: imported/planet-cwt-240304.3dtiles

  7. The section should look similar to the below:

              command:
                - bin/install-cesium-osm-buildings
              args:
                - imported/planet-cwt-240304.3dtiles
  8. Save import-cesium-osm-buildings.yaml and run the job to import the asset

    microk8s kubectl create --namespace cesium-ion -f import-cesium-osm-buildings.yaml

Installing Cesium Moon Terrain

  1. Copy the Cesium Moon Terrain file to the imported directory you created above. For example imported/cesium_moon_terrain_v1.0.3dtiles. You can rename the file if desired. It will not be visible elsewhere in the system.

  2. Generate the import data template by running:

    microk8s kubectl get configmap cesium-ion-jobs --namespace cesium-ion -o=jsonpath="{.data.importData}{'\n'}" > import-cesium-moon-terrain.yaml
  3. Open import-cesium-moon-terrain.yaml in a text editor.

  4. Update metadata.name to be a unique string.

  5. Update spec.template.spec.command to bin/install-cesium-moon-terrain

  6. Update spec.template.spec.args to the relative location of the data: imported/cesium_moon_terrain_v1.0.3dtiles

  7. The section should look similar to the below:

              command:
                - bin/install-cesium-moon-terrain
              args:
                - imported/cesium_moon_terrain_v1.0.3dtiles
  8. Save import-cesium-moon-terrain.yaml and run the job to import the asset

    microk8s kubectl create --namespace cesium-ion -f import-cesium-moon-terrain.yaml

Installing other tilesets

  1. Copy the tileset you would like to deploy into the imported directory. The data must be in Cesium Terrain Database (.terraindb), 3D Tiles Database (.3dtiles), or GeoPackage Tiles (.gpkg) format. You can rename the file if desired. It will not be visible elsewhere in the system.

  2. Generate the import data template by running:

    microk8s kubectl get configmap cesium-ion-jobs --namespace cesium-ion -o=jsonpath="{.data.importData}{'\n'}" > import-tileset.yaml
  3. Open import-tileset.yaml in a text editor.

  4. Update metadata.name to be a unique string.

  5. Update spec.template.spec.command to bin/add-asset

  6. Update spec.template.spec.args to include the required command line options.

    • --location The relative location of the asset within the volume.

    • --type The type of asset. Valid options are 3DTILES, IMAGERY, TERRAIN

    • --name The name of the asset. This can be changed later from within the Cesium ion user interface.

    • --description The description of the asset. This can be changed later from within the Cesium ion user interface.

    • --attribution The attribution of the asset. This can be changed later from within the Cesium ion user interface.

    • --is-default If specified, sets the asset as a default asset to be automatically added to every account when single sign-on is enabled.

    • --quick-add If specified, adds the asset to the Quick Add list returned by /v1/defaults. This list is used by Cesium native clients to show a list of common assets to end users.

  7. The section should look similar to the below:

              command:
                - bin/add-asset
              args:
                - "--location"
                - "imported/<FILENAME>"
                - "--type"
                - "3DTILES"
                - "--name"
                - "Required name"
                - "--description"
                - "Optional description"
                - "--attribution"
                - "Optional attribution"
                - --is-default
  8. Save import-tileset.yaml and run the job to import the asset

    microk8s kubectl create --namespace cesium-ion -f import-tileset.yaml

Verifying imported assets

When an identity provider is configured, assets are added to the Asset Depot and available to all accounts. When running without an identity provider, assets are added directly to My Assets.

If the imported asset does not show up in the Cesium ion user interface, inspect the Kubernetes logs created by each job, which should have actionable information as to what went wrong.
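
For example, assuming the job was created in the cesium-ion namespace, a command along these lines will show its logs (replace the job name with the metadata.name you chose for the import job):

microk8s kubectl logs --namespace cesium-ion job/[YOUR-IMPORT-JOB-NAME]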

If you imported Cesium World Terrain, Cesium OSM Buildings, or Sentinel-2, you may want to configure them as the Cesium Stories default assets. See the Default and shared assets section for instructions on doing so.

External PostgreSQL configuration

The Cesium ion default configuration installs PostgreSQL through a configurable subchart packaged by Bitnami. It provides basic configuration to get up and running, but it is not configured for production use. While you can configure the included subchart yourself by referring to its official page on ArtifactHub, another option is connecting Cesium ion to a separately managed PostgreSQL server.

While setting up an external server is beyond the scope of this document, follow the below instructions to have Cesium ion use it.

  1. Create your connection string, which has the format pg://username:password@hostname:port/databaseName. For example, if your settings are:

    • username: db_user

    • password: 12345

    • hostname: staging-db

    • port: 5432

    • databaseName: cesium_ion

      then the connection string would be pg://db_user:12345@staging-db:5432/cesium_ion.

  2. Open cesium-ion/values.yaml and find the connectionString section, which is under the apiServer section. It should look similar to the below

      connectionString:
        value: ""
        secret:
          name: ""
          key: ""
  3. The value field allows you to place the connection string directly in the configuration file. This is useful for testing and validating your DB connection, but it is not recommended for security reasons. Instead, let’s create a new secret that holds the connection string and reference it.

  4. Create a new file called connection.yaml with the following content, replacing the connection with your own connection string.

    apiVersion: v1
    kind: Secret
    metadata:
      name: cesiumion-connectionstring
    stringData:
      connection: "[CONNECTION STRING FROM STEP 1]"
    type: Opaque
  5. Run microk8s kubectl create --namespace cesium-ion -f connection.yaml to install the secret. Remember to use the same namespace you used when installing Cesium ion.

  6. Edit the cesium-ion/values.yaml connectionString section with the name and key. In this example it should look something like:

      connectionString:
        value: ""
        secret:
          name: "cesiumion-connectionString"
          key: "connection"
  7. Upgrade the application by running:

    microk8s helm upgrade cesium-ion cesium-ion/ --wait --values cesium-ion/values.yaml --namespace cesium-ion

Using a different container registry

In the default configuration, we enabled the microk8s local container registry on localhost:32000. If using anything other than microk8s you will most likely need to update the configuration to point to your own container registry.

  1. Open cesium-ion/values.yaml.

  2. For each of the frontendServer, apiServer, assetServer, and tilingJob sections, under image (see the example after this list):

    • Update registry to point to your registry

    • Update repository to point to the relevant container in your registry

    • If you would like to customize the pullPolicy or tag properties, you can do so now.
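
As an illustration, the image block for one of these sections might end up looking similar to the following. The registry, repository, and tag values here are placeholders for your environment:

apiServer:
  image:
    registry: "registry.example.com:5000"
    repository: "cesium/cesium-ion"
    tag: "1.4.0"
    pullPolicy: IfNotPresent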

Imports from and exports to S3

To allow users of your application to import items from S3, set the s3AssetImport flag in the features section of your values file to true. Assets imported from S3 do not store their source data on one of the mounted volumes; the data is downloaded to the ephemeral working directory mounted on each individual tiling job. Make sure the workingDirectorySize specified under the tilingJob section of your values file is large enough to hold this data.

Similarly, to allow your users to export tiled assets to an S3 bucket, set the s3AssetExport flag in the features section of your values file to true. Both options are turned off in the default configuration.
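
For reference, the relevant entries in your values file would then look something like the sketch below. The placement shown here is illustrative; edit the existing features and tilingJob sections rather than adding new ones, and size workingDirectorySize for the largest S3 import you expect:

features:
  s3AssetImport: true
  s3AssetExport: true

tilingJob:
  # Example size only - must be large enough to hold downloaded source data
  workingDirectorySize: 64Gi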

Run the below command to apply any changes to the S3 configuration that you may have made:

microk8s helm upgrade cesium-ion cesium-ion/ --wait --values cesium-ion/values.yaml --namespace cesium-ion

Asset archiving

To allow users of your application to create archives of their data, set the fullArchives flag in the features section of your values file to true. This also requires you to mount the archives volume, since that is where the archives are stored.
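
For example, the flag in your values file would be set like this (illustrative placement; edit the existing features section):

features:
  fullArchives: true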

Run the below command to apply any changes to the archiving configuration that you may have made:

microk8s helm upgrade cesium-ion cesium-ion/ --wait --values cesium-ion/values.yaml --namespace cesium-ion

Connecting to Cesium ion SaaS

If your installation of Self-Hosted is connected to the public internet, you can use the following features in Self-Hosted by connecting your Cesium ion account:

  • Geocoding

  • Google Photorealistic 3D Tiles

  • Bing Imagery

Linking ion Account

Self-Hosted ion connects to a single Cesium ion account using an access token. The usage of Cesium ion features by all Self-Hosted users will be counted against this account.

  1. In your Cesium ion account create an access token with the geocode and asset-read functionality.

  2. In the apiServer section of values.yaml, set ionAccessToken to the value of your token from step 1 (see the example after these steps).

  3. Run the following command to update your configuration:

    microk8s helm upgrade cesium-ion cesium-ion/ --wait --values cesium-ion/values.yaml --namespace cesium-ion
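
For reference, the relevant entry in the apiServer section would end up looking similar to the following, with the placeholder replaced by your actual token:

apiServer:
  ionAccessToken: "[YOUR CESIUM ION ACCESS TOKEN]"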

Geocoding

Geocoding is automatically activated when you connect a Cesium ion account to a Self-Hosted instance.

Activating Google Photorealistic 3D Tiles

To activate Google Photorealistic 3D Tiles, your Self-Hosted instance must be linked to a Cesium ion account.

  1. Generate the import data template by running:

    microk8s kubectl get configmap cesium-ion-jobs --namespace cesium-ion -o=jsonpath="{.data.importData}{'\n'}" > import-google.yaml
  2. Open import-google.yaml in a text editor.

  3. Update metadata.name to be a unique string.

  4. Update spec.template.spec.command to bin/install-google.

  5. This import script does not require any args, so remove the args section from the generated file.

  6. Save import-google.yaml and run the job to activate Google Photorealistic 3D Tiles from Cesium ion

    microk8s kubectl create --namespace cesium-ion -f import-google.yaml

Activating Bing Imagery

To activate Bing Imagery, your Self-Hosted instance must be linked to a Cesium ion account.

  1. Generate the import data template by running:

    microk8s kubectl get configmap cesium-ion-jobs --namespace cesium-ion -o=jsonpath="{.data.importData}{'\n'}" > import-bing.yaml
  2. Open import-bing.yaml in a text editor.

  3. Update metadata.name to be a unique string.

  4. Update spec.template.spec.command to bin/install-bing.

  5. This import script does not require any args, so remove the args section from the generated file.

  6. Save import-bing.yaml and run the job to activate Bing Imagery from Cesium ion

    microk8s kubectl create --namespace cesium-ion -f import-bing.yaml

Upgrading from a previous release

Upgrading from a previous release is a three-step process:

  1. Review and update values.yaml

  2. Import updated images into your container registry

  3. Execute the helm upgrade command

During the upgrade there will be a new deployment of the API, front-end, and asset services as well as a schema migration of the PostgreSQL database. The process only takes a few minutes once initiated. Downgrading to a previous release is not supported and it is your responsibility to establish a backup and restore strategy for your cluster.

Upgrading from 1.x.x to 1.4.0

Update your values.yaml file

The values.yaml file in 1.4.0 contains a variety of changes to support the newly added features. To allow for a smooth upgrade and to re-use your existing values.yaml file, we recommend using the yq utility. yq can merge your existing values.yaml file with the one shipped in 1.4.0 to avoid cumbersome changes in multiple places. Execute the following command to create a merged file:

yq -n -P 'load("values-1.4.0.yaml") *? load("values-current.yaml")' > values.yaml

Make sure the newly generated values.yaml file is inside the cesium-ion directory. Also update the volume configuration for the newly required archives volume under the localPersistentVolumes section by following the Volume Configuration guide.

Import updated images into your container registry

Importing updated container images uses the same commands from the initial setup. If you are using podman or another Docker alternative, be sure to update the commands for your tooling. Remember to run the commands from the top-level directory where you unpacked the zip:

docker image load --input images/cesium-ion-asset-server.tar
docker tag cesiumgs/cesium-ion-asset-server:1.4.0 localhost:32000/cesium-ion-asset-server:1.4.0
docker push localhost:32000/cesium-ion-asset-server:1.4.0

docker image load --input images/cesium-ion-tiling.tar
docker tag cesiumgs/cesium-ion-tiling:1.4.0 localhost:32000/cesium-ion-tiling:1.4.0
docker push localhost:32000/cesium-ion-tiling:1.4.0

docker image load --input images/cesium-ion.tar
docker tag cesiumgs/cesium-ion:1.4.0 localhost:32000/cesium-ion:1.4.0
docker push localhost:32000/cesium-ion:1.4.0

docker image load --input images/cesium-ion-frontend.tar
docker tag cesiumgs/cesium-ion-frontend:1.4.0 localhost:32000/cesium-ion-frontend:1.4.0
docker push localhost:32000/cesium-ion-frontend:1.4.0

docker image load --input images/cesium-ion-job-watcher.tar
docker tag cesiumgs/cesium-ion-job-watcher:1.4.0 localhost:32000/cesium-ion-job-watcher:1.4.0
docker push localhost:32000/cesium-ion-job-watcher:1.4.0

docker image load --input images/postgresql.tar
docker tag bitnami/postgresql:15.4.0-debian-11-r0 localhost:32000/postgresql:15.4.0-debian-11-r0
docker push localhost:32000/postgresql:15.4.0-debian-11-r0

Refer to Importing images section from the Getting Started for details and troubleshooting.

Execute the helm upgrade command

Once you have updated values.yaml and imported the latest container images into your registry, run the following command to perform the upgrade process:

microk8s helm upgrade cesium-ion cesium-ion/ --wait --values cesium-ion/values.yaml --namespace cesium-ion

The command will not exit until the upgrade is complete or fails. On success, the output will be similar to the initial install and contain a message that starts similar to:

Release "cesium-ion" has been upgraded. Happy Helming!
NAME: cesium-ion
LAST DEPLOYED: Sat Apr 13 09:22:50 2024
NAMESPACE: cesium-ion
STATUS: deployed
REVISION: 2

Application architecture

In Getting Started, you installed the Cesium ion Helm chart which consists of several deployments and other components you should familiarize yourself with in order to operate a production Cesium ion installation. A default configuration with ingress enabled will look similar to the below:

Figure 5. Cesium ion Self-Hosted reference architecture

In the above diagram there are three deployments, one each for front end, assets, and API services. There is also a PostgreSQL database and two jobs, one for daily maintenance and another for running the tiling pipeline. Finally, four persistent volume claims are used to store all data associated with Cesium ion. Continue reading to learn more about these components.

The cesium-ion- prefix used in the above diagram and throughout the documentation is generated from the name of the Helm application at install time. If you selected a different name when running helm install, such as ion-dev, all of the components will be prefixed with ion-dev- instead. For example, cesium-ion-frontend becomes ion-dev-frontend.
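
To see these components in your own cluster, you can list them with a command like the following; the exact output will vary with your configuration and release name:

microk8s kubectl get deployments,cronjobs,jobs,pvc --namespace cesium-ion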

Frontend deployment

The Cesium ion user interface is a statically hosted website with minimal additional configuration options and no external storage requirements. Within the cluster, all components specific to the front end are named or prefixed with cesium-ion-frontend.

While a single node can handle reasonable workloads, standard autoscaling options are available and should be enabled in production deployments. See the Frontend Server section of values.yaml for additional details.

Assets deployment

The Cesium ion asset server is responsible for serving and securing assets and other data created by the tiling process. It also serves additional media, such as images uploaded through Cesium Stories. Within the cluster, all components specific to the asset server are named or prefixed with cesium-ion-assets.

The asset server reads data from the /data/assets and /data/stories mount paths. In Getting Started, you configured local persistent volume claims for each of these, assets-volume and stories-volume, but for multi-node or production deployments a different volume type should be used. All Kubernetes volume types are supported.

While a single node can handle reasonable workloads, standard autoscaling options are available and should be enabled in production deployments. Asset serving performance is the most critical component of Cesium ion when it comes to streaming data to end users at scale. See the asset server section of values.yaml for additional details.

API deployment

The Cesium ion API server is a stateless server responsible for all business logic and data management. It is used by the front end user interface but also serves as the REST API server used by plugins, applications, or workflows that integrate with Cesium ion. Within the cluster, all components specific to the API server are named or prefixed with cesium-ion-api.

The API server needs both read and write access to the /data/assets and /data/stories mount paths. These need to be the same volumes mounted to the asset server, above. A third mount path, /data/sources is used to store all raw source data uploaded by end users. In Getting Started, you configured a local persistent volume claim, sources-volume, which should be customized for your specific deployment needs.

While a single node can handle reasonable workloads, standard autoscaling options are available and should be enabled in production deployments. See the API Server section of values.yaml for additional details.

Daily Maintenance

Some operations, such as permanently removing data previously tagged for delete, run as part of a Kubernetes CronJob using the same configuration as cesium-ion-api. By default, this process runs once a day and should take anywhere from a few seconds to a few minutes to complete. See the maintenance section under API Server in values.yaml for the full list of options.
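You can inspect this CronJob with standard kubectl commands and trigger a one-off run to verify it behaves as expected. The CronJob name is left as a placeholder below because it is derived from your release name; list the CronJobs first to find the actual name:

microk8s kubectl get cronjobs -n cesium-ion
microk8s kubectl create job manual-maintenance --from=cronjob/<maintenance-cronjob-name> -n cesium-ion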

Tiling template

Cesium ion uses the Kubernetes Jobs system to tile data with the 3D tiling pipeline. No pods are dedicated to the pipeline when data is not being processed. Within the cluster, all components specific to the tiling pipeline are named or prefixed with cesium-ion-tiling.

Tiling jobs need read access to the /data/sources mount path and read/write access to the /data/assets mount path. These should be the same volumes used by the asset and API servers. Tiling jobs also use a temporary scratch volume, working-directory, which is an emptyDir created locally on the node. Because of this, local node storage performance can have a direct impact on tiling performance.

The resources section under Tiling Job in values.yaml contains reasonable default values for CPU, memory, and working directory storage used for each tiling job, but you should consider fine tuning these values based on the type of data you expect to tile with Cesium ion. In many cases you will be able to scale up or down resources to either improve performance or reduce resource cost.
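As a hedged sketch, these values can also be overridden at upgrade time with helm --set flags rather than editing values.yaml directly. The release name, chart path, and exact key names under tilingJob are assumptions here; confirm them against the Tiling Job section of cesium-ion/values.yaml before applying:

helm upgrade cesium-ion ./cesium-ion \
  --namespace cesium-ion \
  --reuse-values \
  --set tilingJob.resources.requests.cpu=4 \
  --set tilingJob.resources.requests.memory=16Gi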

PostgreSQL subchart

Cesium ion requires a PostgreSQL database. If you decide to use the included Bitnami subchart, all components specific to the database will be named or prefixed with cesium-ion-postgresql. A volume claim of the same name will be created and used.

As mentioned in the Getting Started, configuring a production PostgreSQL database is outside the scope of this documentation. See External PostgreSQL configuration for details.

Job watcher pod

The job watcher pod is a single pod that watches for changes to the state of Cesium ion’s tiling jobs and pods in the namespace. Its function is to catch events that would otherwise cause a discrepancy between the API server and the status of a tiling job. For example, if a job pod is killed because its node runs out of memory, the watcher pod detects the event and reports it to the API server, since the job pod itself cannot do so.
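To see what the watcher is reporting, find its pod and tail its logs. The grep pattern below assumes the pod name contains job-watcher, matching the image name; replace the placeholder with the actual pod name from the first command:

microk8s kubectl get pods -n cesium-ion | grep job-watcher
microk8s kubectl logs <job-watcher-pod-name> -n cesium-ion -f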

Backup and restore

All data from Cesium ion is persisted to the four claims outlined above: cesium-ion-sources, cesium-ion-assets, cesium-ion-stories, and cesium-ion-postgresql. Additionally, a secret named cesium-ion-secrets is generated at install time and is critical to the operation of the application. It contains the signing key used for user-generated API access tokens. Changing or losing access to this secret will invalidate all API access tokens without any chance of recovery. This secret is retained if Cesium ion is uninstalled and re-used when it is reinstalled with the same application name and namespace, but care must be taken if restoring from a backup.

You can retrieve the secret by running the below command:

microk8s kubectl get secret cesium-ion-secrets -n cesium-ion -o=yaml

While establishing a full data backup and recovery process for your Kubernetes cluster is outside the scope of this document, a Cesium ion installation can be torn down and recreated without loss of data as long as the secret and the volumes associated with these four claims are backed up and restored. It is your responsibility to establish a backup and restore strategy for your cluster.
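For example, one minimal approach to backing up the secret is to export its manifest to a file and re-apply it into the same namespace before reinstalling the chart:

microk8s kubectl get secret cesium-ion-secrets -n cesium-ion -o yaml > cesium-ion-secrets-backup.yaml
microk8s kubectl apply -f cesium-ion-secrets-backup.yaml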

Advanced use cases

While the Cesium ion configuration options detailed in the Getting Started and Additional Configuration sections cover the majority of use cases, Cesium ion’s 3D tiling pipeline and asset server were designed as scalable container images that can be used without the Cesium ion REST API and user interface. These images are stateless, do not depend on an external database, and can be used to build highly customized workflows and applications that do not rely on Kubernetes. Some examples of when you may want to use these components include:

  • Tiling data from the command line, without any additional infrastructure or servers

  • Tiling and serving data using a different container orchestrator

  • Serving data, such as Cesium 3D Global Content, without any additional infrastructure

  • Implementing workflows that comply with internal policies as to how data is accessed, stored, and managed

  • Embedding tiling capabilities directly into any containerized application

How exactly you leverage the Cesium ion 3D tiling pipeline and asset server containers to meet your needs is up to you; keep reading for a detailed tutorial on using them.

Tiling and serving data without Kubernetes

The cesium-ion-asset-server and cesium-ion-tiling container images can be used to run the server and tiling processes in any OCI-compliant runtime such as Docker or Podman. This guide uses Docker, but feel free to use your own tools and adjust the command lines as needed.

  1. Start by running the below commands to load both images into your default registry:

    docker load -i images/cesium-ion-asset-server.tar
    docker load -i images/cesium-ion-tiling.tar

    Note the full name and tag that is displayed in the output, for example cesiumgs/cesium-ion-asset-server:1.0.0 and cesiumgs/cesium-ion-tiling:1.0.0

  2. Create a new empty directory anywhere on your system. For this guide we’ll use ~/myTilesets

Running the asset server

We can now run the server using the command below. You will need to replace license with the same string you added in cesium-ion/values.yaml. You can also set a global environment variable on your system and replace -e CESIUM_LICENSE=license with -e CESIUM_LICENSE to make it easier to run in the future.

docker run --rm \
  -e CESIUM_LICENSE=license \
  -p 8070:8070 \
  -v ~/myTilesets:/tilesets \
  cesiumgs/cesium-ion-asset-server:1.0.0 \
  -d /tilesets \
  --development-mode \
  --cors

After the server starts, navigate to http://localhost:8070/ and you should see the developer landing page:

Cesium ion asset server developer landing page
Figure 6. Cesium ion asset server developer landing page

The top of the page should indicate your license is valid. If not, stop the server, double check the license string, and run again.

Here is the full explanation for each part of the command:

  • docker run --rm - tells docker to run the container and destroy it as soon as it exits.

  • -e CESIUM_LICENSE=license - specifies the Cesium ion license to use

  • -p 8070:8070 - exposes container port 8070 on the host at 8070

  • -v ~/myTilesets:/tilesets - mounts the host ~/myTilesets directory you created into the /tilesets directory inside of the container

  • cesiumgs/cesium-ion-asset-server:1.0.0 - specifies the asset server container image to run

  • -d /tilesets - specifies the path inside of the container to look for tilesets

  • --development-mode - enables development mode

  • --cors - enables Cross-Origin Resource Sharing (CORS)

The landing page is only available when --development-mode is specified and also provides several quality-of-life features to aid developers during application development:

  • Easy access to this reference documentation

  • A basic application to browse and view tilesets

  • CORS is enabled and configured to allow any web client to connect

  • Caching is completely disabled

Leave the server running in the background; in the next section we’ll create tilesets using the 3D tiling pipeline.
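If you would rather not keep a terminal occupied, the same server can be started detached by adding --detach and a container name of your choice (cesium-asset-server below is just an example), then stopped later with docker stop cesium-asset-server:

docker run --detach --rm --name cesium-asset-server \
  -e CESIUM_LICENSE=license \
  -p 8070:8070 \
  -v ~/myTilesets:/tilesets \
  cesiumgs/cesium-ion-asset-server:1.0.0 \
  -d /tilesets \
  --development-mode \
  --cors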

In production, the asset server should always run behind an ingress, load balancer, or CDN.

Tiling data

The server we started in the previous section is serving data located in ~/myTilesets. Let’s use the sample data shipped with the Cesium ion Self-Hosted package to create a tileset.

This guide assumes you unpacked the Cesium ion release zip into ~/cesium-ion-1.0.0/ and that the sample data is located in ~/cesium-ion-1.0.0/sampleData/. Be sure to update the commands if you are using a different location.

Let’s tile a 3D reality model by running the following command. As with the asset server, you will need to replace license with your Cesium ion license string. You can also set a global environment variable on your system and replace -e CESIUM_LICENSE=license with -e CESIUM_LICENSE to make it easier to run in the future.

docker run --rm \
  -e CESIUM_LICENSE=license \
  -v ~/cesium-ion-1.0.0/sampleData/:/input \
  -v ~/myTilesets:/output \
  cesiumgs/cesium-ion-tiling:1.0.0 \
  bin/runJob.js \
  -i /input/Office_Park/Office_Park.obj \
  --input-type 3D_CAPTURE \
  -o /output/Office_Park \
  --output-type 3DTILES

The tiler will print detailed logging information as it runs. Once the command is complete, ~/myTilesets/Office_Park will have been created; it is a 3D Tiles tileset of the input data.

Here is an explanation of each part of the above command:

  • docker run --rm - run the container and destroy it as soon as it completes

  • -e CESIUM_LICENSE=license - configures the license. You will need to replace license with the same string you added in cesium-ion/values.yaml. If you are running the tiler this way often, you can also set a global environment variable on your system and replace -e CESIUM_LICENSE=license with -e CESIUM_LICENSE

  • -v ~/cesium-ion-1.0.0/sampleData/:/input - mounts the host sample data directory on /input inside of the container

  • -v ~/myTilesets:/output - mounts the host ~/myTilesets directory on /output inside of the container

  • cesiumgs/cesium-ion-tiling:1.0.0 - the tiling pipeline image to run

  • bin/runJob.js - the script that actually executes the pipeline

  • -i /input/Office_Park/Office_Park.obj - the path to the input data from inside of the container

  • --input-type 3D_CAPTURE - the type of input data

  • -o /output/Office_Park - the path to the output from inside of the container

  • --output-type 3DTILES - the type of output being produced

The tiling pipeline always produces a single sqlite3 database as output, which can then be hosted by the asset server.
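You can confirm the output was written with a quick directory listing; you should see a new entry for Office_Park (the exact file name or extension on disk may vary):

ls -lh ~/myTilesets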

Go back to the asset server at http://localhost:8070/ and click View all tilesets. You should see Office_Park show up in the list:

Cesium ion asset server tileset listing
Figure 7. Cesium ion asset server tileset listing

Clicking on Office_Park will load it into a basic CesiumJS viewer.

The sample reality model tiled with Cesium ion
Figure 8. Sample reality model tiled with Cesium ion

To confirm the 3D Tiles tileset is being served, preview the root tileset.json file in a browser at http://localhost:8070/v1/3dtiles/Office_Park/tileset.json.

The tileset can be loaded into any application that supports 3D Tiles via the http://localhost:8070/v1/3dtiles/Office_Park/tileset.json url.
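For a quick check outside the browser, you can also fetch the tileset root with curl:

curl http://localhost:8070/v1/3dtiles/Office_Park/tileset.json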

Let’s continue to process the remaining sample data.

Imagery

Cesium ion supports tiling of one or more raster imagery files into a tileset. To tile the Courtyard.tif sample imagery, use the below command line:

docker run --rm \
  -e CESIUM_LICENSE=license \
  -v ~/cesium-ion-1.0.0/sampleData/:/input \
  -v ~/myTilesets:/output \
  cesiumgs/cesium-ion-tiling:1.0.0 \
  bin/runJob.js \
  -i /input/Courtyard.tif \
  --input-type RASTER_IMAGERY \
  -o /output/Courtyard \
  --output-type IMAGERY

Once the command completes, go back to the View all tilesets page and refresh the page. You should now see Courtyard listed in the assets. Click on it to view the data.

It should look similar to the below image:

Sample imagery tiled with Cesium ion
Figure 9. Sample imagery tiled with Cesium ion

To confirm the TileMapService (TMS) imagery tileset is being served, open http://localhost:8070/v1/imagery/Courtyard/tilemapresource.xml in a browser; this will download the root tilemapresource.xml file.

This is a TileMapService (TMS) imagery tileset that can be loaded into any application that supports TMS layout via the http://localhost:8070/v1/imagery/Courtyard/ url.

Terrain

Cesium ion supports tiling of one or more raster terrains into a single tileset.

docker run --rm \
  -e CESIUM_LICENSE=license \
  -v ~/cesium-ion-1.0.0/sampleData/:/input \
  -v ~/myTilesets:/output \
  cesiumgs/cesium-ion-tiling:1.0.0 \
  bin/runJob.js \
  -i /input/ZionNationalPark.tif \
  --input-type RASTER_TERRAIN \
  -o /output/ZionNationalPark \
  --output-type TERRAIN

Once the command completes, go back to the View all tilesets page and refresh the page. You should now see ZionNationalPark listed in the assets. Click on it to view the data.

It should look similar to the below image:

Sample terrain tiled with Cesium ion
Figure 10. Sample terrain tiled with Cesium ion

To confirm the terrain tileset is being served, preview the root layer.json file in a browser at http://localhost:8070/v1/terrain/ZionNationalPark/layer.json.

This is a quantized-mesh terrain tileset that can be loaded into any application that supports quantized-mesh via the http://localhost:8070/v1/terrain/ZionNationalPark/ url.

Point clouds

Cesium ion supports tiling of one or more LAS or LAZ point clouds into a single tileset.

docker run --rm \
  -e CESIUM_LICENSE=license \
  -v ~/cesium-ion-1.0.0/sampleData/:/input \
  -v ~/myTilesets:/output \
  cesiumgs/cesium-ion-tiling:1.0.0 \
  bin/runJob.js \
  -i /input/House.laz \
  --input-type POINT_CLOUD \
  -o /output/House \
  --output-type 3DTILES

Once the command completes, go back to the View all tilesets page and refresh the page. You should now see House listed in the assets. Click on it to view the data.

It should look similar to the below image:

Sample point cloud tiled with Cesium ion
Figure 11. Sample point cloud tiled with Cesium ion

This is a 3D Tiles tileset that can be loaded into any application that supports 3D Tiles via the http://localhost:8070/v1/3dtiles/House/tileset.json url.

Arbitrary 3D Models

Cesium ion supports tiling of one or more glTF, DAE, FBX, or OBJ models into a single tileset. This option should be used whenever the model is not a reality model or similarly captured 3D data. One example is BIM and CAD models.

docker run --rm \
  -e CESIUM_LICENSE=license \
  -v ~/cesium-ion-1.0.0/sampleData/:/input \
  -v ~/myTilesets:/output \
  cesiumgs/cesium-ion-tiling:1.0.0 \
  bin/runJob.js \
  -i /input/OfficePlan/OfficePlan.obj \
  --input-type 3D_MODEL \
  -o /output/OfficePlan \
  --output-type 3DTILES

Once the command completes, go back to the View all tilesets page and refresh the page. You should now see OfficePlan listed in the assets. Click on it to view the data.

It should look similar to the below image:

Sample CAD model tiled with Cesium ion
Figure 12. Sample CAD model tiled with Cesium ion

This is a 3D Tiles tileset that can be loaded into any application that supports 3D Tiles via the http://localhost:8070/v1/3dtiles/OfficePlan/tileset.json url.

CityGML

Cesium ion supports tiling of one or more CityGML files into a single tileset.

docker run --rm \
  -e CESIUM_LICENSE=license \
  -v ~/cesium-ion-1.0.0/sampleData/:/input \
  -v ~/myTilesets:/output \
  cesiumgs/cesium-ion-tiling:1.0.0 \
  bin/runJob.js \
  -i /input/Reichstag/Reichstag.gml \
  --input-type CITYGML \
  -o /output/Reichstag \
  --output-type 3DTILES

Once the command completes, go back to the View all tilesets page and refresh the page. You should now see Reichstag listed in the assets. Click on it to view the data.

It should look similar to the below image:

Sample CityGML model tiled with Cesium ion
Figure 13. Sample CityGML model tiled with Cesium ion

This is a 3D Tiles tileset that can be loaded into any application that supports 3D Tiles via the http://localhost:8070/v1/3dtiles/Reichstag/tileset.json url.

KML/COLLADA

Cesium ion supports tiling KML/COLLADA files, which are a subset of the KML specification used for exporting building models from many tools.

docker run --rm \
  -e CESIUM_LICENSE=license \
  -v ~/cesium-ion-1.0.0/sampleData/:/input \
  -v ~/myTilesets:/output \
  cesiumgs/cesium-ion-tiling:1.0.0 \
  bin/runJob.js \
  -i /input/Office_Park_KML/doc.kml \
  --input-type KML \
  -o /output/Office_Park_KML \
  --output-type 3DTILES

Once the process completes, go back to the View all tilesets page and refresh the page. You should now see Office_Park_KML listed in the assets. Click on it to view the data.

It should look similar to the below image:

Sample KML/COLLADA model tiled with Cesium ion
Figure 14. Sample KML/COLLADA model tiled with Cesium ion

This is a 3D Tiles tileset that can be loaded into any application that supports 3D Tiles via the http://localhost:8070/v1/3dtiles/Office_Park_KML/tileset.json url.

Tiling Scripts

Cesium ion Self-Hosted ships with tiling scripts that allow you to execute the above tiling functions with ease. These tiling scripts are designed to provide a clean interface on top of the existing tiling pipeline exposed through the container.

The license key can be specified in one of four ways (in order of priority):

  1. passing in the license key with the --license flag

  2. passing in the path to a license file with the --license flag

  3. setting the environment variable CESIUM_LICENSE to the license key

  4. adding the license file to the same directory as the script

The container image (in [registry/]image[:tag] format) can be specified in one of three ways (in order of priority):

  1. using the --image flag

  2. setting the environment variable CESIUM_TILING_IMAGE

  3. updating the default-image.txt in the scripts directory

To understand more, execute the bash script of your choice under the scripts folder with the --help flag, e.g., scripts/model-tiler --help. An example of configuring and running a script follows.
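For example, assuming the tiling image was loaded as cesiumgs/cesium-ion-tiling:1.0.0 earlier, you could point the scripts at that image through the environment and then inspect the available options. As before, replace license with your Cesium ion license string:

export CESIUM_TILING_IMAGE=cesiumgs/cesium-ion-tiling:1.0.0
export CESIUM_LICENSE=license
scripts/model-tiler --help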

Next steps

The Cesium ion pipeline and asset server container images are powerful building blocks that can be used to create custom pipelines that scale to handle massive and disparate 3D geospatial datasets and allow you to concentrate on the unique value your application provides.

3D tiling pipeline reference

Common options

The following options are shared across all tiling jobs.

option value required description

--input-type

RASTER_IMAGERY RASTER_TERRAIN CITYGML 3D_CAPTURE 3D_MODEL POINT_CLOUD KML

Yes

The type of source data to be tiled. Only certain combinations of input type and output type are supported, see the specific section for each type of data below for details.

--input

string

Yes

The path to one or more files to process. Globs are supported for selecting a large number of files at once. Files must be available under the tilingJob.volumeMounts sources-volume mountPath location specified in cesium-ion/values.yaml.

Zip files are also supported and will be automatically decompressed before processing.

S3 URLs in the form s3://[bucket]/[prefix] are also supported. If S3 URLs are used for input, all input values must be S3 URLs in the same bucket. This uses the AWS credentials in the environment.

--output-type

IMAGERY TERRAIN 3DTILES

Yes

The type of tileset to produce. Only certain combinations of input type and output type are supported, see the specific section for each type of data below for details.

--output

string

Yes

The path to the output file. The file must be written to a path under the tilingJob.volumeMounts assets-volume mountPath location specified in cesium-ion/values.yaml.

An S3 URL in the form s3://[bucket]/[prefix] is also supported. This uses the AWS credentials in the environment.

--progress-url

string

No

An optional URL to which the tiling job will POST messages about tiling progress. See Monitoring Progress for more details.

Currently the 3D tiling pipeline supports the following source types:

Reality models

A 3D Tiles tileset can be created from one or more 3D Model files by specifying --input-type 3D_CAPTURE and --output-type 3DTILES.

The following model formats are supported:

  • Wavefront OBJ (.obj)

  • glTF (.gltf, .glb)

  • Filmbox (.fbx)

  • COLLADA (.dae)

The 3D_CAPTURE input type is meant specifically for large mesh data typically derived from point clouds or photogrammetric processes. See Arbitrary models for tiling model data that does not fit this description.

Reality model-specific command-line options:

option value default description

--geometry-compression

NONE, DRACO, MESHOPT, or QUANTIZATION

DRACO

Controls the type of compression applied to geometry when creating a 3D Tileset.

NONE disables geometry compression.

DRACO uses Draco Compression to create a smaller tileset with better streaming performance. 3D Tiles produced with this option require a client that supports the KHR_draco_mesh_compression glTF extension. All official Cesium clients are supported.

MESHOPT Meshopt geometric compression is optimized for runtime performance. 3D Tiles produced with this option require a client, such as CesiumJS, that supports the EXT_meshopt_compression glTF extension.

QUANTIZATION Quantization conducts vertex quantization by storing positions as integers rather than floats. 3D Tiles produced with this option require a client, such as CesiumJS, that supports the KHR_mesh_quantization glTF extension.

--position

Array of Numbers

N/A

The origin of the tileset in [longitude, latitude, height] format in EPSG:4326 coordinates and height in meters. This value is ignored if the source data already contains georeferencing information.

--texture-format

AUTO or KTX2

KTX2

Controls the format of textures in the 3D Tiles tileset.

AUTO Automatically select between PNG or JPG on an image-by-image basis to produce the smallest tileset compatible with all 3D Tiles clients.

KTX2 KTX v2.0 is an image container format that supports Basis Universal supercompression. Use KTX2 Compression to create a smaller tileset with better streaming performance.

Arbitrary models

A 3D Tiles tileset can be created from one or more 3D Model files by specifying --input-type 3D_MODEL and --output-type 3DTILES.

The following model formats are supported:

  • Wavefront OBJ (.obj)

  • glTF (.gltf, .glb)

  • Filmbox (.fbx)

  • COLLADA (.dae)

The 3D_MODEL input type is meant for traditional 3D models, such as CAD, BIM, or other human made designs. See Reality models for tiling 3D captures or other large scale meshes.

Model-specific command-line options:

option value default description

--geometry-compression

NONE or DRACO

DRACO

Controls the type of compression applied to geometry when creating a 3D Tileset. NONE disables geometry compression. DRACO uses Draco Compression to create a smaller tileset with better streaming performance. 3D Tiles produced with this option require a client that supports the KHR_draco_mesh_compression glTF extension. All official Cesium clients are supported.

--position

Array of Numbers

N/A

The origin of the tileset in [longitude, latitude, height] format in EPSG:4326 coordinates and height in meters. This value is ignored if the source data already contains georeferencing information.

--texture-format

AUTO or WEBP

AUTO

Controls the format of textures in the 3D Tiles tileset.

AUTO will automatically select between PNG or JPG on an image-by-image basis to produce the smallest tileset compatible with all 3D Tiles clients.

WebP creates smaller images for better streaming performance but requires a client that supports the EXT_texture_webp glTF extension. All official Cesium clients are supported.

Point clouds

A 3D Tiles tileset can be created from one or more point cloud files by specifying --input-type POINT_CLOUD and --output-type 3DTILES.

LASer (.las, .laz) formats are supported

Point cloud-specific command-line options:

option value default description

--geometry-compression

NONE or DRACO

DRACO

Controls the type of compression applied to geometry when creating a 3D Tileset. NONE disables geometry compression. DRACO uses Draco Compression to create a smaller tileset with better streaming performance. 3D Tiles produced with this option require a client that supports the KHR_draco_mesh_compression glTF extension. All official Cesium clients are supported.

--position

Array of Numbers

N/A

The origin of the tileset in [longitude, latitude, height] format in EPSG:4326 coordinates and height in meters. This value is ignored if the source data already contains georeferencing information.

Imagery

An imagery tileset can be created by specifying --input-type RASTER_IMAGERY and --output-type IMAGERY.

The following formats are supported:

  • GeoTIFF (.tiff, .tif)

  • Floating Point Raster (.flt)

  • Arc/Info ASCII Grid (.asc)

  • Source Map (.src)

  • Erdas Imagine (.img)

  • USGS ASCII DEM and CDED (.dem)

  • JPEG (.jpg, .jpeg)

  • PNG (.png)

  • DTED (.dt0, .dt1, .dt2)

Rasters must be orthorectified and contain a coordinate reference system. Sidecar files such as .aux.xml, .tab, .tfw, .wld, .prj, .ovr, .rrd, etc. will be automatically detected and used.

There are no imagery-specific command-line options.

Terrain

A terrain tileset can be created by specifying --input-type RASTER_TERRAIN and --output-type TERRAIN.

The following formats are supported:

  • GeoTIFF (.tiff, .tif)

  • Floating Point Raster (.flt)

  • Arc/Info ASCII Grid (.asc)

  • Source Map (.src)

  • Erdas Imagine (.img)

  • USGS ASCII DEM and CDED (.dem)

  • JPEG (.jpg, .jpeg)

  • PNG (.png)

  • DTED (.dt0, .dt1, .dt2)

Rasters must be single band floating point or integer values. They must also be orthorectified and contain a coordinate reference system. Sidecar files such as .aux.xml, .tab, .tfw, .wld, .prj, .ovr, .rrd, etc. will be automatically detected and used.

Terrain-specific command-line options:

option value default description

--height-reference

MEAN_SEA_LEVEL or WGS84

N/A

By default, the source data’s vertical datum is used as the base height to which elevation values are relative. Specifying this property will override that behavior. If it is not specified and no vertical datum is available, WGS84 is used.

Set to WGS84 to force use of the WGS84 ellipsoid or MEAN_SEA_LEVEL to force use of the EGM96 MSL model.

--to-meters

number

N/A

By default, the source data’s vertical datum is used. When specified, this option overrides the units of the vertical axis and provides the constant scale factor applied to input elevation values to convert them to meters. Setting this property is only useful in the rare case that the vertical axis has different units than the horizontal axis.

For example, if the data is in feet and no vertical datum is specified, 0.3048 should be specified to convert from feet to meters.

--water-mask

boolean

false

Setting this value to true will treat nodata or elevation values at sea level as water and add a water mask extension to the tileset. Typically, this value is only used when tiling global data.

--base-terrain

string

N/A

By default, any area of the earth not covered by the provided terrain will have an elevation of mean sea level. By specifying the path of an existing terrain tileset, the new terrain will be placed on top of the referenced terrain to create a new derived dataset. Void values in the source terrain will also be blended with the existing underlying terrain data.

CityGML

A 3D Tiles tileset can be created from one or more CityGML files (.citygml, .xml, .gml) by specifying --input-type CITYGML and --output-type 3DTILES. CityGML 3.0 is not yet supported.

CityGML-specific command-line options:

option value default description

--geometry-compression

NONE or DRACO

DRACO

Controls the type of compression applied to geometry when creating a 3D Tileset. NONE disables geometry compression. DRACO uses Draco Compression to create a smaller tileset with better streaming performance. 3D Tiles produced with this option require a client that supports the KHR_draco_mesh_compression glTF extension. All official Cesium clients are supported.

--disable-colors

boolean

false

When set to true, the tiler ignores color information and creates all white geometry.

--disable-textures

boolean

false

When set to true, the tiler ignores texture information and uses the underlying geometry color.

--clamp-to-terrain

string

The path to the terrain tileset to use when clamping data, such as Cesium World Terrain. If specified without an argument, mean sea level (EGM96) will be used. When terrain clamping is enabled, the tiler will adjust the height of the following CityGML object types so that they lie flat on the terrain: CityFurniture, GroundSurface, Track, Road, Railway, Square, ReliefFeature, LandUse, and TransportationObject.

KML/COLLADA

A 3D Tiles tileset can be created from one or more KML files (.kml, .kmz) with associated COLLADA (.dae) models by specifying --input-type KML and --output-type 3DTILES.

KML tiling does not support the full KML specification. It will process Model elements inside of an associated Placemark. Any Placemark metadata will also be included in the 3D Tiles output.

KML-specific command-line options:

option value default description

--geometry-compression

NONE or DRACO

DRACO

Controls the type of compression applied to geometry when creating a 3D Tileset. NONE disables geometry compression. DRACO uses Draco Compression to create a smaller tileset with better streaming performance. 3D Tiles produced with this option require a client that supports the KHR_draco_mesh_compression glTF extension. All official Cesium clients are supported.

--clamp-to-terrain

string

The path to the terrain tileset to use when clamping data, such as Cesium World Terrain. By default, mean sea level (EGM96) will be used. When terrain clamping is enabled, the height of the models will be adjusted so that they lie flat on the terrain.

Monitoring Progress

When specifying a --progress-url parameter, the tiling pipeline will POST progress updates to the provided URL as a JSON object with the following shape and properties:

{
  jobId,
  progress: {
    percentComplete,
    status,
    message,
    errorJson,
  }
}
property type description

jobId

string

A unique identifier for the job

progress.percentComplete

number or undefined

A numeric value from 0 to 100 or undefined in the event of an error

progress.status

string

One of IN_PROGRESS, COMPLETE, ERROR, or DATA_ERROR

progress.message

string or undefined

When status is ERROR or DATA_ERROR, contains a human-readable description of the error

progress.errorJson

string or undefined

When status is ERROR, includes any additional information that may be helpful in debugging the issue

The --progress-url option will preserve query parameters so that you can encode other data you need into the URL, such as a database identifier associated with the data or an API token.
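As a hedged example, the earlier Office_Park command could report progress by appending the option; the endpoint host and token below are placeholders for a service you operate, not values shipped with Cesium ion:

docker run --rm \
  -e CESIUM_LICENSE=license \
  -v ~/cesium-ion-1.0.0/sampleData/:/input \
  -v ~/myTilesets:/output \
  cesiumgs/cesium-ion-tiling:1.0.0 \
  bin/runJob.js \
  -i /input/Office_Park/Office_Park.obj \
  --input-type 3D_CAPTURE \
  -o /output/Office_Park \
  --output-type 3DTILES \
  --progress-url "http://progress.example.com/jobs?token=abc123"

Each POST body will match the JSON shape shown above.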

Asset server reference

Installing Global 3D Content

The Cesium ion asset server can host Cesium’s curated Global 3D Content such as Cesium World Terrain, OSM Buildings, and Sentinel-2 imagery. Copy the relevant file into the directory being served by the asset server. You can rename the file to whatever you would like; the file name becomes the tileset identifier. The asset server will auto-detect the database type and host it under the correct route.

For example, if you rename Cesium World Terrain to cwt, it will be available at /v1/terrain/cwt/ and you can browse the layer.json file by visiting /v1/terrain/cwt/layer.json. You can also use Cesium World Terrain with the --clamp-to-terrain option of the 3D tiling pipeline.
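For example, with the development asset server from the earlier tutorial still serving ~/myTilesets, hosting Cesium World Terrain as cwt could look like the following; the source path is a placeholder for wherever your copy of the file is located:

cp <path-to-cesium-world-terrain-file> ~/myTilesets/cwt
curl http://localhost:8070/v1/terrain/cwt/layer.json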

Deploying to production

If you plan on running the asset server in a production environment, make sure your ingress, load balancer, or other content distribution mechanism sets appropriate cache headers for your use case. Because a content caching policy is highly dependent on specific use cases, the asset server does not set any cache headers when in production mode. This means the default behavior is to cache forever, which is probably not what you want.

Log levels

The log levels produced by the API server are in increments of 10. The following mapping shows the levels and what they represent (a filtering example follows the mapping):

10: TRACE,
20: DEBUG,
30: INFO,
40: WARN,
50: ERROR,
60: FATAL
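A hedged sketch for filtering these logs in a cluster, assuming the API server emits one JSON object per line with a numeric level field and that the API deployment is named cesium-ion-api as described earlier:

microk8s kubectl logs deploy/cesium-ion-api -n cesium-ion | jq 'select(.level >= 40)'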

API Reference

If you are using official Cesium clients, there is typically no need for you to interact directly with the API routes created by the server. However if you are implementing custom processes that rely on 3D tiling pipeline output or implementing your own client, refer to the below documentation for retrieving individual tile data.

3D Tiles

Cesium ion serves 3D Tilesets that conform to the 3D Tiles specification.

GET /v1/3dtiles/{tilesetId}/tileset.json

Retrieves the root tileset JSON for tilesetId.

Path Parameters
Name Description

tilesetId

The tileset identifier on which to perform the operation. This is a url-encoded file path relative to assetServer.volumeMounts.mountPath

Path Responses
Code Content-Type Description

200 OK

application/json

The contents of the tileset.json

404 Not Found

application/json

A JSON object of the format: {"code":"ResourceNotFound","message":""}

GET /v1/3dtiles/{tilesetId}/{pathFragment}

Retrieves 3D Tiles content from the tileset.

Path Parameters
Name Description

tilesetId

The tileset identifier on which to perform the operation. This is a url-encoded file path relative to assetServer.volumeMounts.mountPath

pathFragment

The path into the 3D tileset. This is a path fragment and not a single identifier.

Path Responses
Code Content-Type Description

200 OK

Varies based on content type of the 3D Tiles resource

The contents of the file

404 Not Found

application/json

A JSON object of the format: {"code":"ResourceNotFound","message":""}

Imagery

Cesium ion serves imagery that conforms to the TileMap Service specification (TMS).

GET /v1/imagery/{tilesetId}/tilemapresource.xml

Retrieves the TileMap resource associated with the provided tileset

Path Parameters
Name Description

tilesetId

The tileset identifier on which to perform the operation. This is a url-encoded file path relative to assetServer.volumeMounts.mountPath

Path Responses
Code Content-Type Description

200 OK

application/xml

The TileMap resource for this tileset

404 Not Found

application/json

A JSON object of the format: {"code":"ResourceNotFound","message":""}

GET /v1/imagery/{tilesetId}/{zoomLevel}/{tileColumn}/{tileRow}.(jpg|png)

Retrieves an imagery tile at the provided coordinates

Path Parameters
Name Description

tilesetId

The tileset identifier on which to perform the operation. This is a url-encoded file path relative to assetServer.volumeMounts.mountPath

zoomLevel

The zoom level

tileColumn

The "x" tile coordinate.

tileRow

The "y" tile coordinate.

Path Responses
Code Content-Type Description

200 OK

image/jpg, image/png

The image at the provided coordinates

404 Not Found

application/json

A JSON object of the format: {"code":"ResourceNotFound","message":""} Note that for transparent imagery, Cesium ion does not store empty tiles. Therefore a 404 error may happen during normal operation.

Terrain

Cesium ion serves terrain in the quantized-mesh-1.0 terrain format.

GET /v1/terrain/{tilesetId}/layer.json

Retrieves the layer JSON for tilesetId.

Path Parameters
Name Description

tilesetId

The tileset identifier on which to perform the operation. This is a url-encoded file path relative to assetServer.volumeMounts.mountPath

Path Responses
Code Content-Type Description

200 OK

application/json

The contents of the layer.json

404 Not Found

application/json

A JSON object of the format: {"code":"ResourceNotFound","message":""}

GET /v1/terrain/{tilesetId}/{zoomLevel}/{tileColumn}/{tileRow}.terrain

Retrieves a quantized-mesh terrain tile

Path Parameters
Name Description

tilesetId

The tileset identifier on which to perform the operation. This is a url-encoded file path relative to assetServer.volumeMounts.mountPath

zoomLevel

The zoom level

tileColumn

The "x" tile coordinate.

tileRow

The "y" tile coordinate.

Path Responses
Code Content-Type Description

200 OK

application/vnd.quantized-mesh

The terrain tile and optional extensions.

404 Not Found

application/json

A JSON object of the format: {"code":"ResourceNotFound","message":""}

Health check
GET /health

The health check can be used to ensure the server is running. This route always returns 204 No Content if the server is reachable.
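With the asset server from the earlier tutorial still running locally, you can exercise the health check from the command line and confirm the status code in the response headers:

curl -i http://localhost:8070/health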

Third-party software

Cesium ion makes use of the following third-party software.

Name

License

@aws-sdk/client-batch

Apache-2.0

@aws-sdk/client-cloudwatch-logs

Apache-2.0

@aws-sdk/client-ec2

Apache-2.0

@aws-sdk/client-lambda

Apache-2.0

@aws-sdk/client-s3

Apache-2.0

@aws-sdk/client-sts

Apache-2.0

@aws-sdk/credential-provider-node

Apache-2.0

@aws-sdk/lib-storage

Apache-2.0

@aws-sdk/s3-presigned-post

Apache-2.0

@aws-sdk/s3-request-presigner

Apache-2.0

@aws-sdk/util-retry

Apache-2.0

@fortawesome/fontawesome-svg-core

MIT

@fortawesome/free-brands-svg-icons

(CC-BY-4.0 AND MIT)

@fortawesome/free-regular-svg-icons

(CC-BY-4.0 AND MIT)

@fortawesome/free-solid-svg-icons

(CC-BY-4.0 AND MIT)

@fortawesome/react-fontawesome

MIT

@gltf-transform/core

MIT

@gltf-transform/extensions

MIT

@gltf-transform/functions

MIT

@hubspot/api-client

ISC

@kubernetes/client-node

Apache-2.0

@node-saml/node-saml

MIT

@opensearch-project/opensearch

Apache-2.0

@polymer/iron-input

BSD-3-Clause

@polymer/paper-slider

BSD-3-Clause

@recurly/react-recurly

MIT

@slack/web-api

MIT

@webcomponents/webcomponentsjs

BSD-3-Clause

@zip.js/zip.js

BSD-3-Clause

@zxcvbn-ts/core

MIT

@zxcvbn-ts/language-common

MIT

@zxcvbn-ts/language-en

MIT

archiver

MIT

aws-c-auth

Apache-2.0

aws-c-cal

Apache-2.0

aws-c-common

Apache-2.0

aws-c-compression

Apache-2.0

aws-c-event-stream

Apache-2.0

aws-c-http

Apache-2.0

aws-c-io

Apache-2.0

aws-c-mqtt

Apache-2.0

aws-c-s3

Apache-2.0

aws-c-sdkutils

Apache-2.0

aws-checksums

Apache-2.0

aws-crt-cpp

Apache-2.0

aws-sdk

Apache-2.0

aws-sdk-cpp

Apache-2.0

aws4

MIT

base64

BSD-2-Clause

bcrypt

MIT

better-sqlite3

MIT

bluebird

MIT

boost

BSL-1.0

bootstrap

MIT

bootstrap-datetimepicker

MIT

bootstrap-sass

MIT

bshoshany-thread-pool

MIT

bzip2

bzip2

c3

MIT

cesium

Apache-2.0

cesium-native

Apache-2.0 License

chokidar

MIT

chroma-js

(BSD-3-Clause AND Apache-2.0)

clipboard

MIT

clone

MIT

collada-dom

MIT

COLLADA2GLTF

BSD-3-Clause

cookie

MIT

corejs-typeahead

MIT

country-code-lookup

MIT

cpp-jwt

MIT

csv-parse

MIT

cxxopts

MIT

d3

BSD-3-Clause

d3-array

BSD-3-Clause

d3-axis

BSD-3-Clause

d3-interpolate

BSD-3-Clause

d3-scale

BSD-3-Clause

d3-scale-chromatic

BSD-3-Clause

d3-selection

BSD-3-Clause

d3-shape

BSD-3-Clause

d3-time-format

BSD-3-Clause

data-uri-to-buffer

MIT

date-fns

MIT

date-fns-tz

MIT

David Eberly’s Geometric Tools

BSL-1.0

deep-equal

MIT

delaunator-cpp

MIT

dockerode

Apache-2.0

dompurify

(MPL-2.0 OR Apache-2.0)

double-conversion

BSD-3-Clause

draco

Apache-2.0

draco3d

Apache-2.0

draco3dgltf

Apache-2.0

durandal

MIT

eigen

MPL-2.0

email-templates

MIT

escape-html

MIT

expat

MIT

fast-glob

MIT

FBX2glTF

BSD-3-Clause

flatbuffers

Apache-2.0

flatpickr

MIT

fmt

MIT

fontawesome/css

MIT

fontawesome/fonts

SIL OPEN FONT LICENSE Version 1.1

fs-extra

MIT

gdal

MIT

geoip-lite

Apache-2.0

giflib

MIT

glm

MIT

globby

MIT

gltf-pipeline

Apache-2.0

glutess

SGI FREE SOFTWARE LICENSE B v2.0

handlebars

MIT

highlight.js

BSD-3-Clause

i18next

MIT

iconv-lite

MIT

image-size

MIT

ImageMagick

Apache-2.0

Intel ® Architecture Instruction Set Extensions and Future Features

MIT

jimp

MIT

jquery

MIT

jsdom

MIT

json-c

MIT

jsonwebtoken

MIT

klaw

MIT

knex

MIT

laszip

LGPL-2.1

Lato

SIL OPEN FONT LICENSE Version 1.1

lerc

Apache-2.0

libcitygml

LGPL-v2.1

libcurl

MIT

libdeflate

MIT

libgeotiff

MIT

libjpeg-turbo

IJG,BSD-3-Clause,Zlib

libmorton

MIT

libpng

libpng-2.0

libtiff

MIT

libwebp

BSD-3-Clause

libxml2

MIT

lit

BSD-3-Clause

lru-cache

ISC

meshoptimizer

MIT

mime

MIT

moment

MIT

morton-nd

MIT

nan

MIT

nconf

MIT

nlohmann_json

MIT

node-expat

MIT

obj2gltf

Apache-2.0

object-hash

MIT

Open Sans

SIL OPEN FONT LICENSE Version 1.1

openssl

OpenSSL

otpauth

MIT

pako

(MIT AND Zlib)

pcre

BSD-3-Clause

pepjs

MIT

pg

MIT

pino

MIT

piscina

MIT

prismjs

MIT

probe-image-size

MIT

progress

MIT

proj

MIT

PROJ-data

CC-BY-4.0,CC-BY-SA-4.0,CC0-1.0,Data licence Germany - attribution - version 2.0,OGL-Canada-2.0,Open License France,Public domain,BSD-2-Clause

pwa-helpers

BSD-3-Clause

qhull

Qhull

qrcode.react

ISC

quill

BSD-3-Clause

quill-delta-to-html

ISC

rapidjson

MIT

Ray Tracing Gems II source code

MIT

react

MIT

react-bootstrap

MIT

react-dom

MIT

react-hook-form

MIT

react-i18next

MIT

readable-stream

MIT

recurly

MIT

redoc

MIT

redux

MIT

redux-thunk

MIT

request

Apache-2.0

request-promise-native

ISC

requirejs

MIT

requirejs-text

MIT

restify

MIT

restify-errors

MIT

s2n

Apache-2.0

select

MIT

showdown

MIT

sqlite3

Unlicense

stb

Unlicense

svg-captcha

MIT

taskflow

MIT

terser

BSD-2-Clause

tinyobjloader

MIT

toastr

MIT

trompeloeil

BSL-1.0

ua-parser-js

MIT

upath

MIT

urijs

MIT

workerpool

Apache-2.0

xatlas

MIT

xerces-c

Apache-2.0

xml-stream

MIT

xml2js

MIT

xsimd

BSD-3-Clause

xz_utils

Unlicense

yaml

ISC

yargs

MIT

zlib

Zlib

zstd

BSD-3-Clause

zstr

MIT

Changelog

1.4.0 2024-10-03

Additions 🎉

  • Exposed the input-crs option on the point cloud tiler and the reality tiler, which can be used to provide or override a CRS embedded in the data.

  • Added ability to duplicate entire Stories. When viewing the Stories list, click Duplicate on a story tile.

  • Added ability to duplicate individual story slides. In a story, right click on the slide thumbnail to access the Duplicate slide menu option.

  • Added ability to link to individual story slides.

  • Reality tiler performance improvements.

Fixes 🔧

  • Minor bug fixes

1.3.0 2024-08-02

Additions 🎉

  • Added the ability to add Google Photorealistic 3D Tiles and Bing imagery assets to ion Self-Hosted by connecting an ion SaaS account to Self-Hosted.

  • Connecting an ion SaaS account to Self-Hosted also adds geocoding to the user interfaces for Clips, Stories, and the Location Editor for My Assets. Geocoding is accessible through the ion Self-Hosted API for geocoding features in solutions you develop.

  • Added bash scripts for running the tiling pipeline via command line as an alternative to running it in Docker.

  • In the tiling command line for 3D_CAPTURE and 3D_MODEL source data, exposed the input-up-axis option. When set, it overrides the model’s default up axis and treats the given axis as up.

  • Added the ability to create Stories for the moon if Cesium Moon Terrain (CMT) asset is available. Contact Cesium for CMT licensing details.

  • The Reality Tiler was updated with significant performance improvements for large models.

Fixes 🔧

  • Fixed multiple issues with job management. Under certain conditions these issues caused tiling jobs not to run, caused failed jobs to not appear as failed in the ion user interface, and prevented failed jobs from being canceled after the asset was removed.

1.2.2 2024-07-15

  • Security updates

1.2.1 2024-07-09

Fixes 🔧

  • Security updates

  • Minor bug fixes

1.2.0 2024-06-10

Additions 🎉

  • Added Clips support to the frontend application and API.

  • Added the ability to download tiled assets as a zip file. This must be enabled under the features section of values.yaml.

  • Added the ability to import source data from S3 and export tiled asset data to S3. This must be enabled under the features section of values.yaml.

  • The buildings tiler for KML and CITYGML data provides improved performance, best-effort tiling, and improved logging.

Fixes 🔧

  • Fixed an issue with statically hosted assets (GLTF, CZML, KML & GEOJSON) where they were not being deleted by the maintenance script.

  • Fixed uploading 3D Tiles as a pile of files.

  • The size of the cesium-ion-tiling image has been reduced by ~0.5 GB.

  • Fixed the response MIME type for hosted imagery.

  • Fixed a Draco compression bug causing failures in the buildings tiler.

1.1.1 2024-04-23

This is a patch release to fix the inability to tile data in Kubernetes environments without a full cgroup implementation, such as some Windows Subsystem for Linux (WSL) configurations. See Upgrading from a previous release for instructions on the upgrade process.

1.1.0 2024-04-15

This is a maintenance release with many minor bugfixes and improvements for scalability and performance. The most notable items include:

Breaking Changes 📣

  • The assetServer health check now returns a status of 200 instead of 204 to be compatible with ingresses that require it, such as Google Cloud Platform. If you configured an exact check for 204 in your ingress, you must update it to look for 200 instead.

Additions 🎉

  • Added a strategy section to assetServer, apiServer and frontendServer sections to select between Recreate and RollingUpdate deployment strategies. The Recreate deployment strategy is used by default to simplify and reduce required resources for local development.

  • Added activeDeadlineSeconds to tilingJob section of values.yaml. Jobs will be killed if they run longer than the number of seconds provided by this value. The default value is no timeout.

  • Added documentation for installing Cesium World Bathymetry

  • Point cloud tiling now supports and preserves all point record format field data. See our blog post for more details.

  • Improved performance and reduced memory footprint for tiling certain classes of reality models.

Fixes 🔧

  • Fixed a bug where ion would indicate that a tiling job was done before it was actually complete.

  • Imagery tiling now handles partially corrupted GeoTIFFs and will log a warning instead of failing when they are encountered.

  • Tiling KML buildings will no longer fail when encountering a missing or invalid model and will instead log a warning to the console.

  • Fixed an issue when tiling reality models where floating point conversion could cause the tiler to abort operation.

  • Fixed an issue when tiling reality models where missing texture files could cause the tiler to abort operation.

  • Fixed an issue when tiling reality models that caused occasional texture artifacts.

1.0.0 2023-12-12

  • Initial release