Getting started
This step-by-step guide will help you configure and deploy Cesium ion on Kubernetes. While we expect you to already be experienced with running Kubernetes workloads, this guide is written to take you from zero to Cesium ion with minimal prior knowledge. If you’re updating an existing installation, see Upgrading from a previous release.
What’s included
- startHere.html - Documentation (this file)
- cesium-ion/ - Helm chart
- images/cesium-ion.tar - Container image for the Cesium ion API
- images/cesium-ion-asset-server.tar - Container image for the Cesium ion asset server
- images/cesium-ion-frontend.tar - Container image for the Cesium ion user interface
- images/cesium-ion-tiling.tar - Container image for the Cesium ion 3D tiling pipeline
- images/cesium-ion-job-watcher.tar - Container image for the job watcher that detects abnormally terminated jobs
- images/postgresql.tar - Container image for PostgreSQL
- scripts/ - Scripts that allow you to run the tiling pipeline from the command line
- restApi.html - REST API reference
- sampleData/ - Sample data used throughout the documentation
- thirdParty/ - Additional files used by the documentation
System requirements
Any x86-64 compatible system capable of running Kubernetes can also run Cesium ion. Requirements for production workloads depend directly on your use case; Cesium ion performance scales with additional CPU, RAM, and faster storage. For local development, we recommend a computer with the following minimum properties:
- An x86-64 compatible processor
- 8 or more CPU cores (see Configuring Resources)
- 32GB of RAM or greater
- At least 32GB of volume storage
Additionally, to follow this guide you will need sudo, admin, or similarly elevated permissions as well as a tool for working with container registries, such as Docker or podman.
Kubernetes has a diverse set of configuration options. The first time you install Cesium ion we recommend following this guide closely to avoid introducing uncertainty in the setup process. Once you are comfortable with the Cesium ion Helm chart and configuration, you can further customize it to your specific needs.
Installing microk8s
This guide uses microk8s, a lightweight and easy to configure Kubernetes implementation meant for local development. If you have an existing Kubernetes cluster you would like to use, you will need to update the supplied commands to those available with your Kubernetes installation. Skip to the Importing Images section if you are not installing microk8s.
- Linux
- Windows
Run the below command to install microk8s using snap.
sudo snap install microk8s --classic
Update permissions and configuration
Once installed, run the below command to add yourself to the microk8s user group
sudo usermod -a -G microk8s $USER
You also need to create a .kube directory in your home folder to store microk8s configuration. Run the following commands:
mkdir ~/.kube
sudo chown -f -R $USER ~/.kube
sudo microk8s config > ~/.kube/config
Since we made changes to your group settings, you must log out and log back in before continuing.
We recommend using WSL2 to set up a microk8s environment. Follow the official WSL2 microk8s installation instructions at https://microk8s.io/docs/install-wsl2
Update permissions and configuration
Once installed, run the below command to add yourself to the microk8s user group
sudo usermod -a -G microk8s $USER
You also need to create a .kube directory in your home folder to store microk8s configuration. Run the following commands:
mkdir ~/.kube
sudo chown -f -R $USER ~/.kube
sudo microk8s config > ~/.kube/config
Since we made changes to your group settings, you must close out of WSL2 and run it again for the changes to take place.
When working with WSL2, you can easily access files on your root drive, but you must use Unix-style paths. For example, if your root drive is C:\ you can access files at /mnt/c
Verify installation
Verify the installation by running the following command:
microk8s status --wait-ready
This should produce output similar to the following. The important part is "microk8s is running". If you receive an error, review the Installing microk8s section.
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
dns # (core) CoreDNS
ha-cluster # (core) Configure high availability on the current node
helm # (core) Helm - the package manager for Kubernetes
helm3 # (core) Helm 3 - the package manager for Kubernetes
ingress # (core) Ingress controller for external access
metrics-server # (core) K8s Metrics Server for API access to service metrics
registry # (core) Private image registry exposed on localhost:32000
disabled:
cert-manager # (core) Cloud native certificate management
community # (core) The community addons repository
dashboard # (core) The Kubernetes dashboard
gpu # (core) Automatic enablement of Nvidia CUDA
host-access # (core) Allow Pods connecting to Host services smoothly
hostpath-storage # (core) Storage class; allocates storage from host directory
kube-ovn # (core) An advanced network fabric for Kubernetes
mayastor # (core) OpenEBS MayaStor
metallb # (core) Load balancer for your Kubernetes cluster
minio # (core) MinIO object storage
observability # (core) A lightweight observability stack for logs, traces and metrics
prometheus # (core) Prometheus operator for monitoring and logging
rbac # (core) Role-Based Access Control for authorization
storage # (core) Alias to hostpath-storage add-on, deprecated
Verify required features
The default Cesium ion configuration requires dns, helm3, ingress and registry to be enabled. If they are not shown as enabled in the output from the previous command, run the following commands:
microk8s enable dns
microk8s enable helm3
microk8s enable ingress
microk8s enable registry
microk8s config > $HOME/.kube/config
Microk8s installs kubectl and helm. You can run them as microk8s kubectl and microk8s helm to administer the cluster.
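If you find typing the microk8s prefix cumbersome, you can optionally define shell aliases. This is purely a convenience and is not required; the rest of this guide spells out the full commands.
alias kubectl='microk8s kubectl'
alias helm='microk8s helm'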
Importing images
Run the below commands to import the images into the microk8s registry add-on. The registry is created at localhost:32000. Importing these images may take a few minutes for each step. If you are using podman or another Docker alternative, be sure to update the commands for your tooling.
All commands throughout this documentation are assumed to be executed from the top-level directory where you unpacked the zip, i.e. the directory containing startHere.html.
docker image load --input images/cesium-ion-asset-server.tar
docker tag cesiumgs/cesium-ion-asset-server:1.4.0 localhost:32000/cesium-ion-asset-server:1.4.0
docker push localhost:32000/cesium-ion-asset-server:1.4.0
docker image load --input images/cesium-ion-tiling.tar
docker tag cesiumgs/cesium-ion-tiling:1.4.0 localhost:32000/cesium-ion-tiling:1.4.0
docker push localhost:32000/cesium-ion-tiling:1.4.0
docker image load --input images/cesium-ion.tar
docker tag cesiumgs/cesium-ion:1.4.0 localhost:32000/cesium-ion:1.4.0
docker push localhost:32000/cesium-ion:1.4.0
docker image load --input images/cesium-ion-frontend.tar
docker tag cesiumgs/cesium-ion-frontend:1.4.0 localhost:32000/cesium-ion-frontend:1.4.0
docker push localhost:32000/cesium-ion-frontend:1.4.0
docker image load --input images/cesium-ion-job-watcher.tar
docker tag cesiumgs/cesium-ion-job-watcher:1.4.0 localhost:32000/cesium-ion-job-watcher:1.4.0
docker push localhost:32000/cesium-ion-job-watcher:1.4.0
docker image load --input images/postgresql.tar
docker tag bitnami/postgresql:15.4.0-debian-11-r0 localhost:32000/postgresql:15.4.0-debian-11-r0
docker push localhost:32000/postgresql:15.4.0-debian-11-r0
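If you prefer to script the import, the commands above can be collapsed into a single loop. This is an equivalent sketch that assumes Docker and the 1.4.0 image tags shown above; adjust it if you are using podman or different tags.
for image in cesium-ion-asset-server cesium-ion-tiling cesium-ion cesium-ion-frontend cesium-ion-job-watcher; do
  # Load the release tarball, retag it for the local registry, and push it
  docker image load --input "images/${image}.tar"
  docker tag "cesiumgs/${image}:1.4.0" "localhost:32000/${image}:1.4.0"
  docker push "localhost:32000/${image}:1.4.0"
done
docker image load --input images/postgresql.tar
docker tag bitnami/postgresql:15.4.0-debian-11-r0 localhost:32000/postgresql:15.4.0-debian-11-r0
docker push localhost:32000/postgresql:15.4.0-debian-11-r0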
Troubleshooting importing images
Command "docker" not found
If you are running under WSL2, you will need a docker client installed inside the WSL2 environment, not on the host Windows machine. If needed, run the below commands inside WSL2 to install and configure docker.
sudo apt update
sudo apt install docker.io
sudo usermod -a -G docker $USER
exec su -l $USER
Insecure registries
On some versions of Docker, you may receive an error regarding insecure registries. If this happens, configure the Docker daemon to allow the action by creating or editing /etc/docker/daemon.json to include the following setting:
{
"insecure-registries" : ["localhost:32000"]
}
You also need to restart Docker using the below command for the changes to take effect:
sudo systemctl restart docker
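After the restart, you can confirm the setting took effect; the exact output layout varies between Docker versions, but localhost:32000 should appear in the insecure registries list.
docker info | grep -A 3 "Insecure Registries"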
License configuration
Cesium ion requires a license, which is configured at the top of cesium-ion/values.yaml. By default it will be an empty string:
license: ""
Install your license by performing the following steps:
1. Download your license file from https://cesium.com/downloads
2. Open the license file in a text editor and copy the entire contents to the clipboard.
3. Open cesium-ion/values.yaml in a text editor.
4. Paste the contents into the license string at the top of the file.
Volume configuration
The default Cesium ion configuration stores all stateful data across five volumes:
- cesium-ion-sources - User uploaded source data
- cesium-ion-assets - Processed source data served as Cesium ion assets
- cesium-ion-stories - Images and other media uploaded as part of Cesium Stories
- cesium-ion-postgresql - PostgreSQL database containing accounts, Cesium Stories, and asset metadata
- cesium-ion-archives - Processed full asset archives and processed clip and ship output
While Kubernetes has a myriad of storage options, we will use local persistent volumes for ease of setup on a single machine. Follow the below steps to configure them:
When editing cesium-ion/values.yaml on Windows, file paths should be from inside the WSL2 shell. For example, paths to your C:\ drive need to start with /mnt/c/. Additionally, the postgresql volume must live inside of the WSL2 VM, for example at /home/$USER/postgres, and cannot reside under /mnt/c.
1. Run microk8s kubectl get nodes and copy the name of the node. A node in Kubernetes refers to the hardware where the pod is run. In this case your machine will be the only node.
2. In cesium-ion/values.yaml, find the localPersistentVolumes section.
3. For node, replace # REQUIRED: Name returned by "kubectl get nodes" with the node name from step 1.
4. Create a directory for your assets.
5. Under assets, replace # REQUIRED: Path to output data on your filesystem. with the absolute path to the directory you created in step 4.
6. Create a directory for your source data. This must be different from previous directories.
7. Under sources, replace # REQUIRED: Path to input data on your filesystem. with the absolute path to the directory you created in step 6.
8. Create a directory for your stories images. This must be different from previous directories.
9. Under stories, replace # REQUIRED: Path to stories images on your filesystem. with the absolute path to the directory you created in step 8.
10. Create a directory for your archives. This must be different from previous directories.
11. Under archives, replace # REQUIRED: Path to archives data on your filesystem. with the absolute path to the directory you created in step 10.
12. Create a directory for the PostgreSQL database. This must be different from previous directories.
13. Under postgresql, replace # REQUIRED: Path to postgres data on your filesystem. with the absolute path to the directory you created in step 12.
When you are done, the localPersistentVolumes section should contain all the information you need for your install.
localPersistentVolumes:
enabled: true
node: [RESULT OF "get nodes" FROM STEP 1]
assets:
enabled: true
path: [DIRECTORY FROM STEP 4]
capacity: 32Gi
sources:
enabled: true
path: [DIRECTORY FROM STEP 6]
capacity: 32Gi
stories:
enabled: true
path: [DIRECTORY FROM STEP 8]
capacity: 32Gi
archives:
enabled: true
path: [DIRECTORY FROM STEP 10]
capacity: 32Gi
postgresql:
enabled: true
path: [DIRECTORY FROM STEP 12]
capacity: 32Gi
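If you have not yet created the directories referenced in steps 4 through 12, they can be created in one pass. The paths below are only an example; substitute whatever locations you prefer, and on WSL2 keep the PostgreSQL directory inside the VM as noted above.
# Example locations only; use the absolute paths you place in values.yaml
mkdir -p ~/cesium-ion-data/{assets,sources,stories,archives}
mkdir -p ~/cesium-ion-postgres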
PostgreSQL
Except in advanced use cases, Cesium ion requires a PostgreSQL database. To make initial configuration easier, the Cesium ion Helm chart includes a subchart packaged by Bitnami with a preconfigured user and clear text password. This provides basic configuration for local development, but is not configured for production use.
Once you are comfortable with configuring and installing Cesium ion, you have two options:
- Connect Cesium ion to your own, externally managed database. See External PostgreSQL configuration in the "Additional Configuration" section.
- Configure the bundled subchart for production use by referring to its official page on ArtifactHub.
We will continue with the default configuration for this tutorial, but remember that the default configuration is meant for getting up and running with minimal effort. Properly configuring a PostgreSQL database for production use is your responsibility and outside the scope of this document.
Configuring Resources
By default, Cesium ion requires 6.5 CPU cores to run the servers and a job monitoring pod. Additionally, if you are tiling your own data, each job will use 2 cores.
If you want to run a development setup with fewer CPU cores, update the resources subsections under assetServer, apiServer, and frontendServer so that they collectively request less than your desired number of cores.
For a production setup, it is recommended to increase the tilingJob resources to 4 CPU cores. If available, the assetServer and apiServer resources would benefit from being set to 4 CPU cores as well.
Running the install command
Your cesium-ion/values.yaml file should now have a complete and valid configuration. Install the chart into a new cesium-ion namespace by running the following command:
microk8s helm install cesium-ion cesium-ion/ --wait --values cesium-ion/values.yaml --namespace cesium-ion --create-namespace
This process takes about a minute. There is no requirement to use a specific namespace and we are simply following best practices. Once installation is complete, you should see output similar to the below.
NAME: cesium-ion
LAST DEPLOYED: Sun Nov 19 11:31:17 2023
NAMESPACE: cesium-ion
STATUS: deployed
REVISION: 1
NOTES:
The above indicates success. If you received an error instead, run microk8s helm uninstall cesium-ion --namespace cesium-ion to ensure any partially installed components are removed. Then review this section to ensure you didn't miss a step and try again.
Once the install succeeds, the NOTES: section will contain three commands to retrieve the URL of the application. These commands do not mention microk8s, so copy and run the correct versions below:
export NODE_PORT=$(microk8s kubectl get --namespace cesium-ion -o jsonpath="{.spec.ports[0].nodePort}" services cesium-ion-frontend)
export NODE_IP=$(microk8s kubectl get nodes --namespace cesium-ion -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
The output will be the fully qualified URL of the application:
http://10.152.183.244:8080
Visit this URL and the Cesium ion user interface should load. It will look similar to the below image:
If Cesium ion fails to load, uninstall the application and review the above steps again. If your license is expired or invalid, you will instead be redirected to a licensing page with additional information.
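Before uninstalling, a quick check can narrow down the problem. This sketch assumes the NODE_IP and NODE_PORT variables exported above are still set in your shell; all pods should be Running or Completed, and the curl request should return an HTTP response.
# Confirm all pods in the namespace started successfully
microk8s kubectl get pods --namespace cesium-ion
# Confirm the frontend service responds
curl -I "http://$NODE_IP:$NODE_PORT"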
Verification
After loading the application, you can perform a few basic tasks to ensure everything is working correctly.
1. Using a file manager, open the sampleData folder included in the installation zip.
2. Drag and drop House.laz into the browser window.
3. Cesium ion should detect that you are uploading a Point Cloud; click Upload.
4. The asset should upload successfully and you will see an entry appear for it on the My Assets page. Progress information will appear in the preview window when the asset is selected.
5. Once tiling completes, the asset will load. In this case it's a small point cloud of a house.
After the initial install, additional changes to cesium-ion/values.yaml can be applied by running the microk8s helm upgrade command shown in later sections.
Next steps
Congratulations, you now have a working installation of Cesium ion running on Kubernetes. While we recommend you read through this documentation in its entirety at least once, where you go next is up to you:
- The Additional Configuration section provides an overview of the most common and important options for Cesium ion, such as configuring Single sign-on, configuring an ingress and TLS, using an external PostgreSQL server, and installing Cesium 3D Global Content.
- The Application architecture section provides an introduction to the overall system architecture, services, jobs, and other important information you should become familiar with.
- The REST API reference documentation provides information on building clients that integrate directly with Cesium ion.
- The Advanced Topics section describes how to use the tiling pipeline and asset server container images without Kubernetes, Cesium ion's data management, user interface, or REST API. This includes instructions for running the tilers from the command line.
Additional configuration
Ingress and TLS
The default configuration for Cesium ion provides for IP-based access over HTTP/1.1. While this is acceptable for experimentation or local development, an ingress should be configured for production use to take advantage of DNS, TLS, caching, and the improved performance provided by HTTP/2 or HTTP/3. Follow the steps below to set one up:
Enable the ingress
1. Open cesium-ion/values.yaml.
2. Find the ingress section and set enabled to true.
3. Directly above the ingress section, set localNoDNS to false.
4. If you will be using an ingress other than the default, enter it in className and update annotations if required. Using a non-default ingress is outside the scope of this documentation, so consult your ingress documentation if needed.
5. Under the assetServer, apiServer, and frontendServer sections, under service:, change type: from NodePort to ClusterIP.
Configure DNS
The default Cesium ion configuration creates three user-facing services: one each for the API, the front end, and asset serving. Each of them will require a hostname to work with the ingress. While these names can be anything, it is recommended to give them a shared domain. In this example we will use:
- ion.example - User interface
- api.ion.example - REST API
- assets.ion.example - Asset server
Decide on the host names you would like to use or use the above. They can be changed later.
1. Open cesium-ion/values.yaml.
2. For each of the frontendServer, apiServer, and assetServer sections:
   - Find the endpoint subsection.
   - Set host to the desired name.
Access Cesium ion via DNS
For multi-node clusters a DNS server will need to be configured, which is outside the scope of this document. For local testing and development on a single machine, the hosts file can be updated to point to your ingress. Let's do that now to validate the above configuration.
Run the below command to apply the ingress and host changes made in the previous section:
microk8s helm upgrade cesium-ion cesium-ion/ --wait --values cesium-ion/values.yaml --namespace cesium-ion
Then run the following command to get the IP address of the ingress:
microk8s kubectl get --namespace cesium-ion ingress
The output will look similar to the following. In this case the ingress IP is 127.0.0.1.
NAME CLASS HOSTS ADDRESS PORTS AGE
cesium-ion public assets.ion.example,api.ion.example,ion.example 127.0.0.1 80, 443 20s
Open the hosts file in a text editor. You will need elevated permissions to edit this file:
- On Linux this file is located at /etc/hosts
- On Windows this file is located at <Root>\Windows\System32\drivers\etc\hosts
Add an entry for each host name with the IP address returned by the above command. For example:
127.0.0.1 ion.example
127.0.0.1 api.ion.example
127.0.0.1 assets.ion.example
Updates to hosts take effect immediately after the file is saved.
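On Linux or inside WSL2, you can quickly confirm the names resolve and the ingress answers. This is an optional sanity check; any HTTP status code, including a redirect, indicates the ingress is reachable.
getent hosts ion.example api.ion.example assets.ion.example
curl -s -o /dev/null -w "%{http_code}\n" http://ion.example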
Importing a TLS certificate
While not strictly required, it is strongly recommended to configure TLS to enable support for secure communication. In addition to added security, newer protocols such as HTTP/2 and HTTP/3 provide greatly improved performance but require TLS to be enabled before an ingress can take advantage of them.
While each Cesium ion server can have its own configuration, for simplicity these instructions will use a single certificate for all three servers.
Creating the TLS certificate key pair is outside the scope of this document. Refer to your own internal processes or tools. Ensure the certificate includes the DNS host names you configured in the previous section. Wildcard certificates can also be used.
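If you only want to exercise the TLS workflow locally, a self-signed certificate covering all three host names can be generated with OpenSSL. This is a sketch for local experimentation only and assumes OpenSSL 1.1.1 or newer for the -addext flag; browsers will warn about the untrusted certificate, and production deployments should follow your organization's certificate process.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=ion.example" \
  -addext "subjectAltName=DNS:ion.example,DNS:api.ion.example,DNS:assets.ion.example"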
Once you have the certificate, follow the steps below:
1. Create a file named certificate.yaml with the below content. Replace the public and private keys with the values from your certificate.
   apiVersion: v1
   kind: Secret
   metadata:
     name: cesiumion-tls-secret
   stringData:
     tls.crt: |
       -----BEGIN CERTIFICATE-----
       [INSERT YOUR PUBLIC CERTIFICATE]
       -----END CERTIFICATE-----
     tls.key: |
       -----BEGIN PRIVATE KEY-----
       [INSERT YOUR PRIVATE KEY HERE]
       -----END PRIVATE KEY-----
   type: kubernetes.io/tls
2. Install the new secret into your Kubernetes cluster by running the below command (an equivalent one-line alternative is shown after this list). The new secret will be named cesiumion-tls-secret with keys tls.crt and tls.key. Remember to use the same namespace you used when installing Cesium ion.
   microk8s kubectl create --namespace cesium-ion -f certificate.yaml
3. Open cesium-ion/values.yaml.
4. For each of the frontendServer, apiServer, and assetServer sections:
   - Find the endpoint subsection.
   - Set tls to true.
   - Set tlsSecretName to cesiumion-tls-secret.
5. If you prefer to use multiple certificates, repeat the above process with a different certificate and secret name for each server.
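As an alternative to writing certificate.yaml by hand in step 1, kubectl can create an equivalent kubernetes.io/tls secret directly from the certificate files. This sketch assumes the certificate and key are in tls.crt and tls.key in the current directory.
microk8s kubectl create secret tls cesiumion-tls-secret \
  --cert=tls.crt --key=tls.key --namespace cesium-ion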
Upgrade the application and verify the configuration
Once the ingress, DNS, and TLS are configured, upgrade the application by running:
microk8s helm upgrade cesium-ion cesium-ion/ --wait --values cesium-ion/values.yaml --namespace cesium-ion
- Linux
- Windows
Navigate to the configuration application URL, for example https://ion.example. The Cesium ion user interface should load and work the same as before. If you encounter an issue, review the above steps and try again.
In order to expose the Kubernetes ingress running on your WSL2 VM, you will need to port forward the ingress pod to the host. To perform this action, you will need root privileges so that the network binding from kubectl works.
Run the following command to get the name of the ingress pod:
microk8s kubectl get pods -n ingress
Now expose the Kubernetes pod with port-forwarding to the Windows host, remembering to replace ingress-pod-name with the name from above:
sudo microk8s kubectl -n ingress port-forward pod/ingress-pod-name --address 127.0.0.1 80:80 443:443
You should see output similar to the following:
Forwarding from 127.0.0.1:80 -> 80
Forwarding from 127.0.0.1:443 -> 443
Navigate to the configuration application URL, for example https://ion.example. The Cesium ion user interface should load and work the same as before. If you encounter an issue, review the above steps and try again.
The port-forward command does not exit until you press CTRL-C. Exiting the port-forward command also means ion will no longer be accessible from your Windows desktop. If you plan on doing Windows-based development, it is recommended you have this process run in the background on start-up.
Single sign-on (SSO)
The default configuration for Cesium ion does not include authentication and all users share a single account. To allow users to each have their own account, Cesium ion can integrate with your existing identity provider (IdP) to support SSO via SAML authentication. This can be accomplished by following the steps below:
1. From within your identity provider, configure a new SAML application. This process will vary depending on your IdP. Cesium ion requires the login URL, entity ID, and SAML certificate.
2. Cesium ion also has an Administrator user interface for configuring shared assets and application defaults. Access is granted by specifying an IdP attribute name and expected value that signals administrator access. For example, when configuring Google Workspace, the attribute would be "Groups" and the value would be the name of the Google Workspace group you created for ion administrators. Administrator access does not provide access to other users' data and is only for configuring shared assets and defaults.
3. Create a file named samlSecret.yaml and add a secret resource with the following contents, replacing the certificate body with your SAML certificate.
   apiVersion: v1
   kind: Secret
   metadata:
     name: saml-secret
   stringData:
     saml.pem: |
       -----BEGIN CERTIFICATE-----
       [INSERT YOUR SAML CERTIFICATE]
       -----END CERTIFICATE-----
4. Install the new secret into your Kubernetes cluster by running the below command. The new secret will be named saml-secret with a key of saml.pem. Remember to use the same namespace you used when installing Cesium ion.
   microk8s kubectl create --namespace cesium-ion -f samlSecret.yaml
5. Open cesium-ion/values.yaml, find the authenticationMode property, and change it to saml.
6. In the saml section, update the certificateSecret, loginRequestUrl, entityId, nameIdFormat, adminAttributeName, and adminAttributeValue fields to match those configured in step 1. The saml section should look similar to the following:
   authenticationMode: saml

   # If authenticationMode=saml, these are required
   saml:
     # This secret must be created outside the cesium-ion chart
     # It should contain the SAML certificate for your identity provider
     certificateSecret:
       name: "saml-secret"
       key: "saml.pem"
     # The SAML URL for your identity provider
     loginRequestUrl: "https://login.for.your.provider.com/"
     # The entity ID that was configured in your identity provider when setting up the SAML application
     entityId: "your-entity-id"
     # The name ID format to use. Valid values are email, persistent or unspecified.
     nameIdFormat: "persistent"
     # Access to the Cesium ion Administrator user interface is granted to any
     # identity that matches the below name and value criteria
     # The attribute name to look up. Examples: "roles", "Groups"
     adminAttributeName: "Groups"
     # The value that is expected to be found for adminAttributeName.
     # This value is treated as a semicolon delimited list.
     # Examples: "admin", "my-admin-group", "users; admin; members"
     adminAttributeValue: "ion-administrators"
7. Save your changes to cesium-ion/values.yaml.
8. Upgrade the application by running:
   microk8s helm upgrade cesium-ion cesium-ion/ --wait --values cesium-ion/values.yaml --namespace cesium-ion
9. When you navigate to the Cesium ion user interface, you will be redirected to your identity provider for authentication before being granted access. If you encounter an issue, review the above steps and try again.
Data uploaded while SSO is disabled is unavailable once SSO is enabled. Similarly, any data uploaded while using SSO becomes unavailable when SSO is disabled.
Default and shared assets
Cesium ion does not contain any data by default. For example, creating a new story with Cesium Stories will show the Earth as a bright blue WGS84 ellipsoid without terrain or imagery:
You can configure assets, such as Cesium 3D Global Content, so they are available to all users and optionally used by Stories as defaults. If you don't have any global data, we've included two sample datasets as part of the installation zip: BlueMarble.tif (global imagery) and GTOPO30.tif (global terrain).
These instructions will help you load these or any other data sets into Cesium ion. Steps vary slightly based on whether you are using Single sign-on, so be sure to choose the tab that matches your configuration.
- SSO enabled
- SSO disabled
When using Single sign-on, only users identified as Cesium ion administrators can upload or modify default or shared assets. You can verify that you are in this group by clicking on your username in the upper right and confirming the presence of the Administration option. When using SSO, the Asset Depot is also the only way to make data available to all users.
1. Click on your username in the upper right and select Administration.
2. Click Add Asset.
3. Using your preferred file manager, open the documentation/sampleData/ folder included with the Cesium ion release zip.
4. Drag and drop BlueMarble.tif onto this page.
5. Optionally change the name (this can be changed later).
6. Enter a description; this will be visible to all users (this can be changed later).
7. Toggle Default so that it is enabled.
8. Select Imagery under What kind of data is this?
9. Click Upload.
10. Click Add Asset again.
11. Drag and drop GTOPO30.tif onto this page.
12. Optionally change the name (this can be changed later).
13. Enter a description; this will be visible to all users (this can be changed later).
14. Toggle Default so that it is enabled.
15. Select Terrain under What kind of data is this?
16. Leave everything else as the default and click Upload.
17. You can monitor the progress of each asset from this page while they tile. The process will take about 12 minutes using the default Cesium ion tiling job allocations. Once both assets complete tiling, continue to the next step.
18. Click App Settings along the top navigation menu.
19. Click Story default imagery and select Blue Marble.
20. Click Story default terrain and select GTOPO30.
21. Click Asset viewer imagery and select Blue Marble.
Every user that logs into Cesium ion will now have the same defaults for Cesium Stories and the asset preview window. Any Asset Depot assets marked as "Default" will automatically be added to the user’s My Assets page the first time they log in.
When not using Single sign-on, data is shared among all users via the My Assets page and any user can upload and change any asset.
1. Navigate to the My Assets page.
2. Using your preferred file manager, open the documentation/sampleData/ folder included with the Cesium ion release zip.
3. Drag and drop BlueMarble.tif anywhere onto the Cesium ion application, which will bring you to the Add Data page.
4. Optionally change the name (this can be changed later).
5. Select Imagery from the drop down.
6. Click Upload.
7. Drag and drop GTOPO30.tif anywhere onto the Cesium ion application.
8. Optionally change the name (this can be changed later).
9. Select Terrain from the drop down.
10. Leave other options as the default and click Upload.
11. You will now be back on the My Assets page. You can click on either BlueMarble or GTOPO30 to monitor the tiling progress of each asset. GTOPO30 takes the longest, about 12 minutes, using the default Cesium ion tiling job allocations. Once both assets complete, continue to the next step.
12. Click App Settings along the top navigation menu.
13. Click Story default imagery and select BlueMarble.
14. Click Story default terrain and select GTOPO30.
15. Click Asset viewer imagery and select BlueMarble.
After performing the above steps, creating a Cesium Story will now use the configured default assets:
Additionally, when previewing other assets on the My Assets page, Blue Marble will be used as the default base layer.
You may have noticed we did not select anything for the Story default buildings setting. This option is meant exclusively for the Cesium OSM Buildings tileset, available for purchase separately.
Cesium 3D Global Content
In most cases, 3D Tiles tilesets created outside of Cesium ion or purchased from a third party can be imported into Cesium ion using the same upload process used for untiled source data. However, when dealing with large existing tilesets spanning hundreds of gigabytes or terabytes, such as Cesium 3D Global Content, it is not ideal to upload these through the user interface.
Instead, you can place the data in a volume accessible to the Cesium ion installation and tell Cesium ion about it. This is accomplished by running the Cesium provided importData job template. Cesium ion will not make an additional copy of the data; it will serve it from the provided location.
For this guide, we will use the default assets-volume volume mount we already created as part of the Volume Configuration section of Getting Started.
Create a subdirectory under the assets-volume location, for example imported, then follow the instructions for the relevant data below.
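In the local microk8s setup from Getting Started, the assets volume is backed by a host directory, so the subdirectory can be created directly on the host. The path below is a placeholder; use the actual assets path from your values.yaml.
mkdir -p /path/to/your/assets-volume/imported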
Installing Cesium World Terrain
1. Copy the Cesium World Terrain file to the imported directory you created above, for example imported/cesium_world_terrain_v1.2.terraindb. You can rename the file if desired. It will not be visible elsewhere in the system.
2. Generate the import data template by running:
   microk8s kubectl get configmap cesium-ion-jobs --namespace cesium-ion -o=jsonpath="{.data.importData}{'\n'}" > import-cesium-world-terrain.yaml
3. Open import-cesium-world-terrain.yaml in a text editor.
4. Update metadata.name to be a unique string.
5. Update spec.template.spec.command to bin/install-cesium-world-terrain.
6. Update spec.template.spec.args to the relative location of the data: imported/cesium_world_terrain_v1.2.terraindb
7. The section should look similar to the below:
   command:
     - bin/install-cesium-world-terrain
   args:
     - imported/cesium_world_terrain_v1.2.terraindb
8. Save import-cesium-world-terrain.yaml and run the job to import the asset:
   microk8s kubectl create --namespace cesium-ion -f import-cesium-world-terrain.yaml
Installing Cesium World Bathymetry
1. Copy the Cesium World Bathymetry file to the imported directory you created above, for example imported/cesium_world_bathymetry_v1.0.terraindb. You can rename the file if desired. It will not be visible elsewhere in the system.
2. Generate the import data template by running:
   microk8s kubectl get configmap cesium-ion-jobs --namespace cesium-ion -o=jsonpath="{.data.importData}{'\n'}" > import-cesium-world-bathymetry.yaml
3. Open import-cesium-world-bathymetry.yaml in a text editor.
4. Update metadata.name to be a unique string.
5. Update spec.template.spec.command to bin/install-cesium-world-bathymetry.
6. Update spec.template.spec.args to the relative location of the data: imported/cesium_world_bathymetry_v1.0.terraindb
7. The section should look similar to the below:
   command:
     - bin/install-cesium-world-bathymetry
   args:
     - imported/cesium_world_bathymetry_v1.0.terraindb
8. Save import-cesium-world-bathymetry.yaml and run the job to import the asset:
   microk8s kubectl create --namespace cesium-ion -f import-cesium-world-bathymetry.yaml
Installing Sentinel-2 Imagery
1. Copy the Sentinel-2 sqlite database to the imported directory you created above, for example imported/s2cloudless-2021_4326_v1.0.0_with_index.sqlite. You can rename the file if desired. It will not be visible elsewhere in the system.
2. Generate the import data template by running:
   microk8s kubectl get configmap cesium-ion-jobs --namespace cesium-ion -o=jsonpath="{.data.importData}{'\n'}" > import-sentinel-2.yaml
3. Open import-sentinel-2.yaml in a text editor.
4. Update metadata.name to be a unique string.
5. Update spec.template.spec.command to bin/install-sentinel-2.
6. Update spec.template.spec.args to the relative location of the data: imported/s2cloudless-2021_4326_v1.0.0_with_index.sqlite
7. The section should look similar to the below:
   command:
     - bin/install-sentinel-2
   args:
     - imported/s2cloudless-2021_4326_v1.0.0_with_index.sqlite
8. Save import-sentinel-2.yaml and run the job to import the asset:
   microk8s kubectl create --namespace cesium-ion -f import-sentinel-2.yaml
Installing Cesium OSM Buildings
1. Copy the Cesium OSM Buildings 3D Tiles database to the imported directory you created above, for example imported/planet-cwt-240304.3dtiles. You can rename the file if desired. It will not be visible elsewhere in the system.
2. Generate the import data template by running:
   microk8s kubectl get configmap cesium-ion-jobs --namespace cesium-ion -o=jsonpath="{.data.importData}{'\n'}" > import-cesium-osm-buildings.yaml
3. Open import-cesium-osm-buildings.yaml in a text editor.
4. Update metadata.name to be a unique string.
5. Update spec.template.spec.command to bin/install-cesium-osm-buildings.
6. Update spec.template.spec.args to the relative location of the data: imported/planet-cwt-240304.3dtiles
7. The section should look similar to the below:
   command:
     - bin/install-cesium-osm-buildings
   args:
     - imported/planet-cwt-240304.3dtiles
8. Save import-cesium-osm-buildings.yaml and run the job to import the asset:
   microk8s kubectl create --namespace cesium-ion -f import-cesium-osm-buildings.yaml
Installing Cesium Moon Terrain
1. Copy the Cesium Moon Terrain file to the imported directory you created above, for example imported/cesium_moon_terrain_v1.0.3dtiles. You can rename the file if desired. It will not be visible elsewhere in the system.
2. Generate the import data template by running:
   microk8s kubectl get configmap cesium-ion-jobs --namespace cesium-ion -o=jsonpath="{.data.importData}{'\n'}" > import-cesium-moon-terrain.yaml
3. Open import-cesium-moon-terrain.yaml in a text editor.
4. Update metadata.name to be a unique string.
5. Update spec.template.spec.command to bin/install-cesium-moon-terrain.
6. Update spec.template.spec.args to the relative location of the data: imported/cesium_moon_terrain_v1.0.3dtiles
7. The section should look similar to the below:
   command:
     - bin/install-cesium-moon-terrain
   args:
     - imported/cesium_moon_terrain_v1.0.3dtiles
8. Save import-cesium-moon-terrain.yaml and run the job to import the asset:
   microk8s kubectl create --namespace cesium-ion -f import-cesium-moon-terrain.yaml
Installing other tilesets
1. Copy the tileset you would like to deploy into the imported directory. The data must be in Cesium Terrain Database (.terraindb), 3D Tiles Database (.3dtiles), or GeoPackage Tiles (.gpkg) format. You can rename the file if desired. It will not be visible elsewhere in the system.
2. Generate the import data template by running:
   microk8s kubectl get configmap cesium-ion-jobs --namespace cesium-ion -o=jsonpath="{.data.importData}{'\n'}" > import-tileset.yaml
3. Open import-tileset.yaml in a text editor.
4. Update metadata.name to be a unique string.
5. Update spec.template.spec.command to bin/add-asset.
6. Update spec.template.spec.args to include the required command line options:
   - --location - The relative location of the asset within the volume.
   - --type - The type of asset. Valid options are 3DTILES, IMAGERY, TERRAIN.
   - --name - The name of the asset. This can be changed later from within the Cesium ion user interface.
   - --description - The description of the asset. This can be changed later from within the Cesium ion user interface.
   - --attribution - The attribution of the asset. This can be changed later from within the Cesium ion user interface.
   - --is-default - If specified, sets the asset as a default asset to be automatically added to every account when single sign-on is enabled.
   - --quick-add - If specified, adds the asset to the Quick Add list returned by /v1/defaults. This list is used by Cesium native clients to show a list of common assets to end users.
7. The section should look similar to the below:
   command:
     - bin/add-asset
   args:
     - "--location"
     - "imported/<FILENAME>"
     - "--type"
     - "3DTILES"
     - "--name"
     - "Required name"
     - "--description"
     - "Optional description"
     - "--attribution"
     - "Optional attribution"
     - --is-default
8. Save import-tileset.yaml and run the job to import the asset:
   microk8s kubectl create --namespace cesium-ion -f import-tileset.yaml
Verifying imported assets
When an identity provider is configured, assets are added to the Asset Depot and are available to all accounts. When running without an identity provider, assets are added directly to My Assets.
If the imported asset does not show up in the Cesium ion user interface, inspect the Kubernetes logs created by each job, which should have actionable information as to what went wrong.
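A quick way to find and inspect a failed import is with kubectl; replace the job name with the metadata.name you chose when editing the template.
microk8s kubectl get jobs --namespace cesium-ion
microk8s kubectl logs --namespace cesium-ion job/<your-import-job-name>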
If you imported Cesium World Terrain, Cesium OSM Buildings, or Sentinel-2, you may want to configure them as the Cesium Stories default assets. See the Default and shared assets section for instructions on doing so.
External PostgreSQL configuration
The Cesium ion default configuration installs PostgreSQL through a configurable subchart packaged by Bitnami. It provides basic configuration to get up and running, but it is not configured for production use. While you can configure the included subchart yourself by referring to its official page on ArtifactHub, another option is connecting Cesium ion to a separately managed PostgreSQL server.
While setting up an external server is beyond the scope of this document, follow the below instructions to have Cesium ion use it.
1. Create your connection string, which has the format pg://username:password@hostname:port/databaseName. For example, if your settings are:
   - username: db_user
   - password: 12345
   - hostname: staging-db
   - port: 5432
   - databaseName: cesium_ion
   then the connection string would be pg://db_user:12345@staging-db:5432/cesium_ion.
2. Open cesium-ion/values.yaml and find the connectionString section under the apiServer section. It should look similar to the below:
   connectionString:
     value: ""
     secret:
       name: ""
       key: ""
3. The value field allows you to place the connection string directly in the configuration file. This is useful for testing and validating your database connection, but it is not recommended for security reasons. Instead, create a new secret containing the connection string and reference that.
4. Create a new file called connection.yaml with the following content, replacing the connection with your own connection string.
   apiVersion: v1
   kind: Secret
   metadata:
     name: cesiumion-connection-string
   stringData:
     connection: "[CONNECTION STRING FROM STEP 1]"
   type: Opaque
5. Run microk8s kubectl create --namespace cesium-ion -f connection.yaml to install the secret. Remember to use the same namespace you used when installing Cesium ion.
6. Edit the connectionString section of cesium-ion/values.yaml with the secret name and key. In this example it should look something like:
   connectionString:
     value: ""
     secret:
       name: "cesiumion-connection-string"
       key: "connection"
7. Upgrade the application by running:
   microk8s helm upgrade cesium-ion cesium-ion/ --wait --values cesium-ion/values.yaml --namespace cesium-ion
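Before pointing Cesium ion at the database, it can help to confirm the server is reachable with the credentials from step 1. This sketch assumes the psql client is installed on the machine you are testing from; note that psql uses the postgresql:// scheme rather than the pg:// scheme Cesium ion expects.
psql "postgresql://db_user:12345@staging-db:5432/cesium_ion" -c "SELECT 1;"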
Using a different container registry
In the default configuration, we enabled the microk8s local container registry at localhost:32000. If you are using anything other than microk8s, you will most likely need to update the configuration to point to your own container registry.
1. Open cesium-ion/values.yaml.
2. For each of the frontendServer, apiServer, assetServer, and tilingJob sections, under image:
   - Update registry to point to your registry.
   - Update repository to point to the relevant container in your registry.
   - If you would like to customize the pullPolicy or tag properties, you can do so now.
Imports from and exports to S3
To allow users of your application to import items from S3, set the s3AssetImport flag in the features section of your values file to true. Assets imported from S3 do not store their source data on one of the mounted volumes; instead, the data is downloaded to the ephemeral working directory mounted on an individual tiling job. Make sure the workingDirectorySize specified under the tilingJob section of your values file is large enough to hold this data.
Similarly, to allow your users to export tiled assets to an S3 bucket, set the s3AssetExport flag in the features section of your values file to true. Both options are turned off in the default configuration.
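You can flip these flags in a text editor, or, if you have the yq v4 utility used elsewhere in this guide, from the command line. This sketch assumes the features section sits at the top level of values.yaml; verify where it lives in your copy of the file before running it.
yq -i '.features.s3AssetImport = true' cesium-ion/values.yaml
yq -i '.features.s3AssetExport = true' cesium-ion/values.yaml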
Run the below command to apply any changes to the S3 configuration that you may have made:
microk8s helm upgrade cesium-ion cesium-ion/ --wait --values cesium-ion/values.yaml --namespace cesium-ion
Asset archiving
To allow users of your application to create archives of their data, set the fullArchives flag in the features section of your values file to true. This also requires the archives volume to be mounted, since that is where the archives are stored.
Run the below command to apply any changes to the archiving configuration that you may have made:
microk8s helm upgrade cesium-ion cesium-ion/ --wait --values cesium-ion/values.yaml --namespace cesium-ion
Connecting to Cesium ion SaaS
If your installation of Self-Hosted is connected to the public internet, you can use the following Cesium ion SaaS features in Self-Hosted by connecting your Cesium ion account:
- Geocoding
- Google Photorealistic 3D Tiles
- Bing Imagery
Linking ion Account
Self-Hosted ion connects to a single Cesium ion account using an access token. Usage of Cesium ion features by all Self-Hosted users will be counted against this account.
1. In your Cesium ion account, create an access token with the geocode and asset-read functionality.
2. In the apiServer section of values.yaml, set ionAccessToken to the value of your token from step 1.
3. Run the following command to update your configuration:
   microk8s helm upgrade cesium-ion cesium-ion/ --wait --values cesium-ion/values.yaml --namespace cesium-ion
Geocoding
Geocoding is automatically activated when you connect a Cesium ion account to a Self-Hosted instance.
Activating Google Photorealistic 3D Tiles
To activate Google Photorealistic 3D Tiles, your Self-Hosted instance must be linked to a Cesium ion account.
1. Generate the import data template by running:
   microk8s kubectl get configmap cesium-ion-jobs --namespace cesium-ion -o=jsonpath="{.data.importData}{'\n'}" > import-google.yaml
2. Open import-google.yaml in a text editor.
3. Update metadata.name to be a unique string.
4. Update spec.template.spec.command to bin/install-google.
5. This import script does not require any args, so remove the args section from the generated file.
6. Save import-google.yaml and run the job to activate Google Photorealistic 3D Tiles from Cesium ion:
   microk8s kubectl create --namespace cesium-ion -f import-google.yaml
Activating Bing Imagery
To activate Bing Imagery, your Self-Hosted instance must be linked to a Cesium ion account.
1. Generate the import data template by running:
   microk8s kubectl get configmap cesium-ion-jobs --namespace cesium-ion -o=jsonpath="{.data.importData}{'\n'}" > import-bing.yaml
2. Open import-bing.yaml in a text editor.
3. Update metadata.name to be a unique string.
4. Update spec.template.spec.command to bin/install-bing.
5. This import script does not require any args, so remove the args section from the generated file.
6. Save import-bing.yaml and run the job to activate Bing Imagery from Cesium ion:
   microk8s kubectl create --namespace cesium-ion -f import-bing.yaml
Upgrading from a previous release
Upgrading from a previous release is a three-step process:
1. Review and update values.yaml
2. Import updated images into your container registry
3. Execute the helm upgrade command
During the upgrade there will be a new deployment of the API, front-end, and asset services as well as a schema migration of the PostgreSQL database. The process only takes a few minutes once initiated. Downgrading to a previous release is not supported and it is your responsibility to establish a backup and restore strategy for your cluster.
Upgrading from 1.x.x to 1.4.0
Update your values.yaml file
The values.yaml file in 1.4.0 contains a variety of changes to support the newly added features. To allow for a smooth upgrade and to re-use your existing values.yaml file, we recommend using the yq utility. yq can merge your existing values.yaml file with the one included in 1.4.0, avoiding cumbersome changes in multiple places. Execute the following command to create a merged file:
yq -n -P 'load("values-1.4.0.yaml") *? load("values-current.yaml")' > values.yaml
Make sure the newly generated values.yaml file is inside the cesium-ion directory. Also update the volume configuration for the newly required archives volume under the localPersistentVolumes section by following the Volume Configuration guide.
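Using the same yq utility, you can confirm the merged file picked up the newly required archives settings before upgrading; the command should print the archives subsection rather than null.
yq '.localPersistentVolumes.archives' cesium-ion/values.yaml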
Import updated images into your container registry
Importing updated container images uses the same commands from the initial setup. If you are using podman or another Docker alternative, be sure to update the commands for your tooling. Remember to run the commands from the top-level directory where you unpacked the zip:
docker image load --input images/cesium-ion-asset-server.tar
docker tag cesiumgs/cesium-ion-asset-server:1.4.0 localhost:32000/cesium-ion-asset-server:1.4.0
docker push localhost:32000/cesium-ion-asset-server:1.4.0
docker image load --input images/cesium-ion-tiling.tar
docker tag cesiumgs/cesium-ion-tiling:1.4.0 localhost:32000/cesium-ion-tiling:1.4.0
docker push localhost:32000/cesium-ion-tiling:1.4.0
docker image load --input images/cesium-ion.tar
docker tag cesiumgs/cesium-ion:1.4.0 localhost:32000/cesium-ion:1.4.0
docker push localhost:32000/cesium-ion:1.4.0
docker image load --input images/cesium-ion-frontend.tar
docker tag cesiumgs/cesium-ion-frontend:1.4.0 localhost:32000/cesium-ion-frontend:1.4.0
docker push localhost:32000/cesium-ion-frontend:1.4.0
docker image load --input images/cesium-ion-job-watcher.tar
docker tag cesiumgs/cesium-ion-job-watcher:1.4.0 localhost:32000/cesium-ion-job-watcher:1.4.0
docker push localhost:32000/cesium-ion-job-watcher:1.4.0
docker image load --input images/postgresql.tar
docker tag bitnami/postgresql:15.4.0-debian-11-r0 localhost:32000/postgresql:15.4.0-debian-11-r0
docker push localhost:32000/postgresql:15.4.0-debian-11-r0
Refer to the Importing images section of Getting Started for details and troubleshooting.
Execute the helm upgrade command
Once you have updated values.yaml and imported the latest container images into your registry, run the following command to perform the upgrade:
microk8s helm upgrade cesium-ion cesium-ion/ --wait --values cesium-ion/values.yaml --namespace cesium-ion
The command will not exit until the upgrade is complete or fails. On success, the output will be similar to the initial install and contain a message that starts similar to:
Release "cesium-ion" has been upgraded. Happy Helming!
NAME: cesium-ion
LAST DEPLOYED: Sat Apr 13 09:22:50 2024
NAMESPACE: cesium-ion
STATUS: deployed
REVISION: 2
Application architecture
In Getting Started, you installed the Cesium ion Helm chart which consists of several deployments and other components you should familiarize yourself with in order to operate a production Cesium ion installation. A default configuration with ingress enabled will look similar to the below:
In the above diagram there are three deployments, one each for the front end, assets, and API services. There is also a PostgreSQL database and two jobs, one for daily maintenance and another for running the tiling pipeline. Finally, five persistent volume claims are used to store all data associated with Cesium ion. Continue reading to learn more about these components.
The cesium-ion- prefix used in the above diagram and throughout the documentation is generated from the name of the Helm application at install time. If you selected a different name when running helm install, such as ion-dev, all of the components will be prefixed with ion-dev- instead. For example: cesium-ion-frontend → ion-dev-frontend.
Frontend deployment
The Cesium ion user interface is a statically hosted website with minimal additional configuration options and no external storage requirements. Within the cluster, all components specific to the front end are named or prefixed with cesium-ion-frontend.
While a single node can handle reasonable workloads, standard autoscaling options are available and should be enabled in production deployments. See the Frontend Server section of values.yaml for additional details.
Assets deployment
The Cesium ion asset server is responsible for serving and securing assets and other data created by the tiling process. It also serves additional media, such as images uploaded through Cesium Stories. Within the cluster, all components specific to the asset server are named or prefixed with cesium-ion-assets.
The asset server reads data from the /data/assets and /data/stories mount paths. In Getting Started, you configured local persistent volume claims for each of these, assets-volume and stories-volume, but for multi-node or production deployments a different volume type should be used. All Kubernetes volume types are supported.
While a single node can handle reasonable workloads, standard autoscaling options are available and should be enabled in production deployments. Asset serving performance is the most critical component of Cesium ion when it comes to streaming data to end users at scale. See the asset server section of values.yaml for additional details.
API deployment
The Cesium ion API server is a stateless server responsible for all business logic and data management. It is used by the front end user interface but also serves as the REST API server used by plugins, applications, or workflows that integrate with Cesium ion. Within the cluster, all components specific to the API server are named or prefixed with cesium-ion-api.
The API server needs both read and write access to the /data/assets and /data/stories mount paths. These need to be the same volumes mounted to the asset server, above. A third mount path, /data/sources, is used to store all raw source data uploaded by end users. In Getting Started, you configured a local persistent volume claim, sources-volume, which should be customized for your specific deployment needs.
While a single node can handle reasonable workloads, standard autoscaling options are available and should be enabled in production deployments. See the API server section of values.yaml for additional details.
Daily Maintenance
Some operations, such as permanently removing data previously tagged for deletion, run as part of a Kubernetes CronJob using the same configuration as cesium-ion-api. By default, this process runs once a day and should take anywhere from a few seconds to a few minutes to complete. See the maintenance section under API Server in values.yaml for the full list of options.
Tiling template
Cesium ion uses the Kubernetes Jobs system to tile data with the 3D tiling pipeline. No pods are dedicated to the pipeline when data is not being processed. Within the cluster, all components specific to the tiling pipeline are named or prefixed with cesium-ion-tiling.
Tiling jobs need read access to the /data/sources mount path and read/write access to the /data/assets mount path. These should be the same volumes used by the asset and API servers. Tiling jobs also have a temporary scratch volume, working-directory, which is an emptyDir created locally on the node. Because of this, local node storage performance can have a direct impact on tiling performance.
The resources section under Tiling Job in values.yaml contains reasonable default values for CPU, memory, and working directory storage used for each tiling job, but you should consider fine tuning these values based on the type of data you expect to tile with Cesium ion. In many cases you will be able to scale resources up or down to either improve performance or reduce cost.
PostgreSQL subchart
Cesium ion requires a PostgreSQL database. If you decide to use the included Bitnami subchart, all components specific to the database will be named or prefixed with cesium-ion-postgresql. A volume claim of the same name will be created and used.
As mentioned in the Getting Started, configuring a production PostgreSQL database is outside the scope of this documentation. See External PostgreSQL configuration for details.
Job watcher pod
The job watcher pod is a single pod that watches for changes to the state of Cesium ion's tiling jobs and pods in the namespace. Its function is to catch any events that may cause a discrepancy between the API server and the status of the tiling job. For example, if a job pod is killed because the node ran out of memory, the watcher pod detects the event and communicates it to the API server, since the job pod itself cannot do so.
Backup and restore
All data from Cesium ion is persisted to the five claims outlined above: cesium-ion-sources, cesium-ion-assets, cesium-ion-stories, cesium-ion-archives, and cesium-ion-postgresql. Additionally, a secret named cesium-ion-secrets is generated at install time and is critical to the operation of the application. It contains the signing key used for user-generated API access tokens. Changing or losing access to this secret will invalidate all API access tokens without a chance for recovery. This secret is retained if Cesium ion is uninstalled and re-used when it is reinstalled with the same application name and namespace, but care will need to be taken if restoring from a backup.
You can retrieve the secret by running the below command:
microk8s kubectl get secret cesium-ion-secrets -n cesium-ion -o=yaml
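One way to capture the secret as part of your backup routine is to export it to a file; store the output securely, since it contains the token signing key.
microk8s kubectl get secret cesium-ion-secrets -n cesium-ion -o yaml > cesium-ion-secrets-backup.yaml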
While establishing a full data backup and recovery process for your Kubernetes cluster is outside the scope of this document, as long as the secret and the volumes associated with these claims are backed up and restored, a Cesium ion installation can be torn down and recreated without loss of data. It is your responsibility to establish a backup and restore strategy for your cluster.
Advanced use cases
While the Cesium ion configuration options detailed in the Getting Started and Additional Configuration sections cover the majority of use cases, Cesium ion’s 3D tiling pipeline and asset server were designed as scalable container images that can be used without the Cesium ion REST API and user interface. These images are stateless, do not depend on an external database, and can be used to build highly customized workflows and applications that do not rely on Kubernetes. Some examples of when you may want to use these components include:
-
Tiling data from the command line, without any additional infrastructure or servers
-
Tiling and serving data using a different container orchestrator
-
Serving data, such as Cesium 3D Global Content, without any additional infrastructure
-
Implementing workflows that comply with internal policies as to how data is accessed, stored, and managed
-
Embed tiling capabilities directly into any containerized application
How exactly you leverage the Cesium ion 3D tiling pipeline and asset server containers to meet your needs is up to you; keep reading for a detailed tutorial on using them.
Tiling and serving data without Kubernetes
The cesium-ion-asset-server
and cesium-ion-tiling
container images can be used to run the server and tiling processes in any OCI-compliant implementation such as Docker or Podman. This guide uses docker
, but feel free to use your own tools and adjust the command lines as needed.
-
Start by running the below commands to load both images into your default registry:
docker load -i images/cesium-ion-asset-server.tar
docker load -i images/cesium-ion-tiling.tar
Note the full name and tag that is displayed in the output, for example cesiumgs/cesium-ion-asset-server:1.0.0 and cesiumgs/cesium-ion-tiling:1.0.0
-
Create a new empty directory anywhere on your system. For this guide we’ll use
~/myTilesets
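For example, on Linux or inside WSL2:
mkdir -p ~/myTilesets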
Running the asset server
We can now run the server. You will need to replace license with the same string you added in cesium-ion/values.yaml. You can also set a global environment variable on your system and replace -e CESIUM_LICENSE=license with -e CESIUM_LICENSE to make it easier to run in the future.
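For example, assuming a bash-compatible shell, you could set the variable like this (replace the placeholder with your actual license string, and add the line to your shell profile such as ~/.bashrc to make it permanent):
export CESIUM_LICENSE="<your license string>"
With the license in hand, start the server with the following command: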
docker run --rm \
-e CESIUM_LICENSE=license \
-p 8070:8070 \
-v ~/myTilesets:/tilesets \
cesiumgs/cesium-ion-asset-server:1.0.0 \
-d /tilesets \
--development-mode \
--cors
After the server starts, navigate to http://localhost:8070/ and you should see the developer landing page:
The top of the page should indicate your license is valid. If not, stop the server, double check the license string, and run again.
Here is the full explanation for each part of the command:
-
docker run --rm
- tells docker to run the container and destroy it as soon as it exits
-
-e CESIUM_LICENSE=license
- specifies the Cesium ion license to use
-
-p 8070:8070
- exposes container port 8070 on the host at 8070
-
-v ~/myTilesets:/tilesets
- mounts the host ~/myTilesets directory you created into the /tilesets directory inside of the container
-
cesiumgs/cesium-ion-asset-server:1.0.0
- specifies the asset server container image to run
-
-d /tilesets
- specifies the path inside of the container to look for tilesets
-
--development-mode
- enables development mode
-
--cors
- enables Cross-Origin Resource Sharing (CORS)
The landing page is only available when --development-mode is specified. Development mode also provides several quality-of-life features to aid developers during application development:
-
Easy access to this reference documentation
-
A basic application to browse and view tilesets
-
CORS is enabled and configured to allow any web client to connect
-
Caching is completely disabled
Leave the server running in the background; in the next section we’ll create tilesets using the 3D tiling pipeline.
In production, the asset server should always run behind an ingress, load balancer, or CDN.
Tiling data
The server we started in the previous section is serving data located in ~/myTilesets
. Let’s use the sample data shipped with the Cesium ion Self-Hosted package to create a tileset.
This guide assumes you unpacked the Cesium ion release zip into ~/cesium-ion-1.0.0/
and that the sample data is located in ~/cesium-ion-1.0.0/sampleData/
. Be sure to update the commands if you are using a different location.
Let’s tile a 3D reality model by running the following command. As with the asset server, you will need to replace license
with your Cesium ion license string. You can also set a global environment variable on your system and replace -e CESIUM_LICENSE=license
with -e CESIUM_LICENSE
to make it easier to run in the future.
docker run --rm \
-e CESIUM_LICENSE=license \
-v ~/cesium-ion-1.0.0/sampleData/:/input \
-v ~/myTilesets:/output \
cesiumgs/cesium-ion-tiling:1.0.0 \
bin/runJob.js \
-i /input/Office_Park/Office_Park.obj \
--input-type 3D_CAPTURE \
-o /output/Office_Park \
--output-type 3DTILES
You will see a lot of logging information output across the screen. Once the command is complete, ~/myTilesets/Office_Park
will have been created and is a 3D Tiles tileset of the input data.
Here is an explanation of each part of the above command:
-
docker run --rm
- run the container and destroy it as soon as it completes
-
-e CESIUM_LICENSE=license
- configures the license. You will need to replace license with the same string you added in cesium-ion/values.yaml. If you are running the tiler this way often, you can also set a global environment variable on your system and replace -e CESIUM_LICENSE=license with -e CESIUM_LICENSE
-
-v ~/cesium-ion-1.0.0/sampleData/:/input
- mounts the host sample data directory on /input inside of the container
-
-v ~/myTilesets:/output
- mounts the host ~/myTilesets directory on /output inside of the container
-
cesiumgs/cesium-ion-tiling:1.0.0
- the tiling pipeline image to run
-
bin/runJob.js
- the script that actually executes the pipeline
-
-i /input/Office_Park/Office_Park.obj
- the path to the input data from inside of the container
-
--input-type 3D_CAPTURE
- the type of input data
-
-o /output/Office_Park
- the path to the output file from inside of the container
-
--output-type 3DTILES
- the type of output being produced
The tiling pipeline always produces a single sqlite3 database as output, which can then be hosted by the asset server.
Go back to the asset server at http://localhost:8070/ and click View all tilesets
. You should see Office_Park
show up in the list:
Clicking on Office_Park
will load it into a basic CesiumJS viewer.
To confirm the 3D tileset is being served, preview the root tileset.json file in a browser via the URL http://localhost:8070/v1/3dtiles/Office_Park/tileset.json.
The tileset can be loaded into any application that supports 3D Tiles via the http://localhost:8070/v1/3dtiles/Office_Park/tileset.json
url.
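You can also verify the endpoint from the command line with curl, assuming curl is installed and the asset server from the previous section is still running:
curl http://localhost:8070/v1/3dtiles/Office_Park/tileset.json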
Let’s continue to process the remaining sample data.
Imagery
Cesium ion supports tiling of one or more raster imagery files into a tileset. To tile the Courtyard.tif
sample imagery, use the below command line:
docker run --rm \
-e CESIUM_LICENSE=license \
-v ~/cesium-ion-1.0.0/sampleData/:/input \
-v ~/myTilesets:/output \
cesiumgs/cesium-ion-tiling:1.0.0 \
bin/runJob.js \
-i /input/Courtyard.tif \
--input-type RASTER_IMAGERY \
-o /output/Courtyard \
--output-type IMAGERY
Once the command completes, go back to the View all tilesets
page and refresh the page. You should now see Courtyard
listed in the assets. Click on it to view the data.
It should look similar to the below image:
To confirm the TileMapService (TMS) imagery tileset is being served, open the URL http://localhost:8070/v1/imagery/Courtyard/tilemapresource.xml in a browser to download the root tilemapresource.xml file.
This is a TileMapService (TMS) imagery tileset that can be loaded into any application that supports TMS layout via the http://localhost:8070/v1/imagery/Courtyard/
url.
Terrain
Cesium ion supports tiling of one or more raster terrains into a single tileset.
docker run --rm \
-e CESIUM_LICENSE=license \
-v ~/cesium-ion-1.0.0/sampleData/:/input \
-v ~/myTilesets:/output \
cesiumgs/cesium-ion-tiling:1.0.0 \
bin/runJob.js \
-i /input/ZionNationalPark.tif \
--input-type RASTER_TERRAIN \
-o /output/ZionNationalPark \
--output-type TERRAIN
Once the command completes, go back to the View all tilesets
page and refresh the page. You should now see ZionNationalPark
listed in the assets. Click on it to view the data.
It should look similar to the below image:
To confirm the terrain tileset is being served, preview the root layer.json file in a browser via the URL http://localhost:8070/v1/terrain/ZionNationalPark/layer.json.
This is a quantized-mesh terrain tileset that can be loaded into any application that supports quantized-mesh via the http://localhost:8070/v1/terrain/ZionNationalPark/
url.
Point clouds
Cesium ion supports tiling of one or more LAS or LAZ point clouds into a single tileset.
docker run --rm \
-e CESIUM_LICENSE=license \
-v ~/cesium-ion-1.0.0/sampleData/:/input \
-v ~/myTilesets:/output \
cesiumgs/cesium-ion-tiling:1.0.0 \
bin/runJob.js \
-i /input/House.laz \
--input-type POINT_CLOUD \
-o /output/House \
--output-type 3DTILES
Once the command completes, go back to the View all tilesets
page and refresh the page. You should now see House
listed in the assets. Click on it to view the data.
It should look similar to the below image:
This is a 3D Tiles tileset that can be loaded into any application that supports 3D Tiles via the http://localhost:8070/v1/3dtiles/House/tileset.json
url.
Arbitrary 3D Models
Cesium ion supports tiling of one or more glTF, DAE, FBX, or OBJ models into a single tileset. This option should be used whenever the model is not a reality model or similarly captured 3D data. Examples include BIM and CAD models.
docker run --rm \
-e CESIUM_LICENSE=license \
-v ~/cesium-ion-1.0.0/sampleData/:/input \
-v ~/myTilesets:/output \
cesiumgs/cesium-ion-tiling:1.0.0 \
bin/runJob.js \
-i /input/OfficePlan/OfficePlan.obj \
--input-type 3D_MODEL \
-o /output/OfficePlan \
--output-type 3DTILES
Once the command completes, go back to the View all tilesets
page and refresh the page. You should now see OfficePlan
listed in the assets. Click on it to view the data.
It should look similar to the below image:
This is a 3D Tiles tileset that can be loaded into any application that supports 3D Tiles via the http://localhost:8070/v1/3dtiles/OfficePlan/tileset.json
url.
CityGML
Cesium ion supports tiling of one or more CityGML files into a single tileset.
docker run --rm \
-e CESIUM_LICENSE=license \
-v ~/cesium-ion-1.0.0/sampleData/:/input \
-v ~/myTilesets:/output \
cesiumgs/cesium-ion-tiling:1.0.0 \
bin/runJob.js \
-i /input/Reichstag/Reichstag.gml \
--input-type CITYGML \
-o /output/Reichstag \
--output-type 3DTILES
Once the command completes, go back to the View all tilesets
page and refresh the page. You should now see Reichstag
listed in the assets. Click on it to view the data.
It should look similar to the below image:
This is a 3D Tiles tileset that can be loaded into any application that supports 3D Tiles via the http://localhost:8070/v1/3dtiles/Reichstag/tileset.json
url.
KML/COLLADA
Cesium ion supports tiling KML/COLLADA files, which are a subset of the KML specification used for exporting building models from many tools.
docker run --rm \
-e CESIUM_LICENSE=license \
-v ~/cesium-ion-1.0.0/sampleData/:/input \
-v ~/myTilesets:/output \
cesiumgs/cesium-ion-tiling:1.0.0 \
bin/runJob.js \
-i /input/Office_Park_KML/doc.kml \
--input-type KML \
-o /output/Office_Park_KML \
--output-type 3DTILES
Once the process completes, go back to the View all tilesets
page and refresh the page. You should now see Office_Park_KML
listed in the assets. Click on it to view the data.
It should look similar to the below image:
This is a 3D Tiles tileset that can be loaded into any application that supports 3D Tiles via the http://localhost:8070/v1/3dtiles/Office_Park_KML/tileset.json
url.
Tiling Scripts
Cesium ion Self-Hosted ships with tiling scripts that allow you to execute the above tiling functions with ease. These tiling scripts are designed to provide a clean interface on top of the existing tiling pipeline exposed through the container.
The license key can be specified one of four ways (in order of priority):
-
passing in the license key with the
--license
flag -
passing in the path to a license file with the
--license
flag -
setting the environment variable
CESIUM_LICENSE
to the license key -
adding the
license
file to the same directory as the script
The container image (in [registry/]image[:tag]
format) can be specified one of three ways (in order of priority):
-
using the
--image
flag -
setting the environment variable
CESIUM_TILING_IMAGE
-
updating the
default-image.txt
in the scripts directory
To learn more, execute the bash script of your choice under the scripts folder with the --help flag, e.g. scripts/model-tiler --help
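For example, one way to configure the scripts before exploring their options is to set the documented environment variables and then request help from the tiler you plan to use (replace the placeholder license string with your own):
# Point the scripts at the loaded tiling image and your license
export CESIUM_TILING_IMAGE=cesiumgs/cesium-ion-tiling:1.0.0
export CESIUM_LICENSE="<your license string>"
# Show the available options for the model tiler
scripts/model-tiler --help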
Next steps
The Cesium ion pipeline and asset server container images are powerful building blocks that can be used to create custom pipelines that scale to handle massive and disparate 3D geospatial datasets, allowing you to concentrate on the unique value your application provides.
-
For the full list of command line options and tiling features available, see the 3D tiling pipeline reference
-
For the full list of command line options and API routes available to the asset server, see the Asset server reference
3D tiling pipeline reference
Common options
The following options are shared across all tiling jobs.
option | value | required | description
---|---|---|---
--input-type | 3D_CAPTURE, 3D_MODEL, POINT_CLOUD, RASTER_IMAGERY, RASTER_TERRAIN, CITYGML, KML | Yes | The type of source data to be tiled. Only certain combinations of input type and output type are supported; see the specific section for each type of data below for details.
--input | string | Yes | The path to one or more files to process. Globs are supported for selecting a large amount of files at once. Files must be available under a mounted input path. Zip files are also supported and will be automatically decompressed before processing. S3 urls are also supported.
--output-type | 3DTILES, IMAGERY, TERRAIN | Yes | The type of tileset to produce. Only certain combinations of input type and output type are supported; see the specific section for each type of data below for details.
--output | string | Yes | The path to the output file. The file must be written to a path under a mounted output path. An S3 url is also supported.
--progress-url | string | No | An optional url for the tiling job to POST messages about tiling progress. See Monitoring Progress for more details.
Currently the 3D tiling pipeline supports the following source types:
Reality models
A 3D Tiles tileset can be created from one or more 3D Model files by specifying --input-type 3D_CAPTURE
and --output-type 3DTILES
.
The following model formats are supported:
-
Wavefront OBJ (.obj)
-
glTF (.gltf, .glb)
-
Filmbox (.fbx)
-
COLLADA (.dae)
The 3D_CAPTURE input type is meant specifically for large mesh data typically derived from point clouds or photogrammetric processes. See Arbitrary models for tiling model data that does not fit this description.
Reality model-specific command-line options:
option | value | default | description
---|---|---|---
--geometry-compression | | | Controls the type of compression applied to geometry when creating a 3D Tileset.
--position | Array of Numbers | N/A | The origin of the tileset in [longitude, latitude, height] format in EPSG:4326 coordinates and height in meters. This value is ignored if the source data already contains georeferencing information.
--texture-format | | | Controls the format of textures in the 3D Tiles tileset.
Arbitrary models
A 3D Tiles tileset can be created from one or more 3D Model files by specifying --input-type 3D_MODEL
and --output-type 3DTILES
.
The following model formats are supported:
-
Wavefront OBJ (.obj)
-
glTF (.gltf, .glb)
-
Filmbox (.fbx)
-
COLLADA (.dae)
The 3D_MODEL input type is meant for traditional 3D models, such as CAD, BIM, or other human made designs. See Reality models for tiling 3D captures or other large scale meshes.
Model-specific command-line options:
option | value | default | description
---|---|---|---
--geometry-compression | | | Controls the type of compression applied to geometry when creating a 3D Tileset.
--position | Array of Numbers | N/A | The origin of the tileset in [longitude, latitude, height] format in EPSG:4326 coordinates and height in meters. This value is ignored if the source data already contains georeferencing information.
--texture-format | | | Controls the format of textures in the 3D Tiles tileset.
Point clouds
A 3D Tiles tileset can be created from one or more point cloud files by specifying --input-type POINT_CLOUD
and --output-type 3DTILES
.
LASer (.las, .laz) formats are supported
Point cloud-specific command-line options:
option | value | default | description
---|---|---|---
--geometry-compression | | | Controls the type of compression applied to geometry when creating a 3D Tileset.
--position | Array of Numbers | N/A | The origin of the tileset in [longitude, latitude, height] format in EPSG:4326 coordinates and height in meters. This value is ignored if the source data already contains georeferencing information.
Imagery
An imagery tileset can be created by specifying --input-type RASTER_IMAGERY
and --output-type IMAGERY
.
The following formats are supported:
-
GeoTIFF (.tiff, .tif)
-
Floating Point Raster (.flt)
-
Arc/Info ASCII Grid (.asc)
-
Source Map (.src)
-
Erdas Imagine (.img)
-
USGS ASCII DEM and CDED (.dem)
-
JPEG (.jpg, .jpeg)
-
PNG (.png)
-
DTED (.dt0, .dt1, .dt2)
Rasters must be orthorectified and contain a coordinate reference system. Sidecar files such as .aux.xml, .tab, .tfw, .wld, .prj, .ovr, .rrd, etc… will be automatically detected and used.
There are no imagery-specific command-line options.
Terrain
A terrain tileset can be created by specifying --input-type RASTER_TERRAIN
and --output-type TERRAIN
.
The following formats are supported:
-
GeoTIFF (.tiff, .tif)
-
Floating Point Raster (.flt)
-
Arc/Info ASCII Grid (.asc)
-
Source Map (.src)
-
Erdas Imagine (.img)
-
USGS ASCII DEM and CDED (.dem)
-
JPEG (.jpg, .jpeg)
-
PNG (.png)
-
DTED (.dt0, .dt1, .dt2)
Rasters must be single band floating point or integer values. They must also be orthorectified and contain a coordinate reference system. Sidecar files such as .aux.xml, .tab, .tfw, .wld, .prj, .ovr, .rrd, etc… will be automatically detected and used.
Terrain-specific command-line options:
option | value | default | description
---|---|---|---
--height-reference | | N/A | By default, the source data’s vertical datum is used as the base height which elevation values are relative to. Specifying this property will override that behavior.
--to-meters | number | N/A | By default, the source data’s vertical datum is used. When specified, overrides the units of the vertical axis and provides the constant scale factor to apply to input elevation values to convert them to meters. Setting this property is only useful in the rare case that the vertical axis has different units than the horizontal axis.
--water-mask | boolean | | Setting this value to true includes a water mask in the generated terrain tileset.
--base-terrain | string | N/A | By default, any area of the earth not covered by the provided terrain will have an elevation of mean sea level. By specifying the path of an existing terrain tileset, the new terrain will be placed on top of the referenced terrain to create a new derived dataset. Void values in the source terrain will also be blended with the existing underlying terrain data.
CityGML
A 3D Tiles tileset can be created from one or more CityGML files (.citygml, .xml, .gml) by specifying --input-type CITYGML
and --output-type 3DTILES
. CityGML 3.0 is not yet supported.
CityGML-specific command-line options:
option | value | default | description
---|---|---|---
--geometry-compression | | | Controls the type of compression applied to geometry when creating a 3D Tileset.
--disable-colors | boolean | | When set to true, the tiler ignores color information in the source data.
--disable-textures | boolean | | When set to true, the tiler ignores texture information and uses the underlying geometry color.
--clamp-to-terrain | string | | The path to the terrain tileset to use when clamping data, such as Cesium World Terrain. If specified without an argument, mean sea level (EGM96) will be used. When terrain clamping is enabled, CityGML object heights are adjusted so that supported object types lay flat on the terrain.
KML/COLLADA
A 3D Tiles tileset can be created from one or more KML files (.kml, .kmz) with associated COLLADA (.dae) models by specifying --input-type KML
and --output-type 3DTILES
.
KML tiling does not support the full KML specification. It will process Model elements inside of an associated Placemark. Any Placemark metadata will also be included in the 3D Tiles output.
KML-specific command-line options:
option | value | default | description
---|---|---|---
--geometry-compression | | | Controls the type of compression applied to geometry when creating a 3D Tileset.
--clamp-to-terrain | string | | The path to the terrain tileset to use when clamping data, such as Cesium World Terrain. By default, mean sea level (EGM96) will be used. When terrain clamping is enabled, the height of the models will be adjusted to lay flat on the terrain.
Monitoring Progress
When specifying a --progress-url
parameter, the tiling pipeline will POST progress updates to the provided URL as a JSON object with the following shape and properties:
{
  jobId,
  progress: {
    percentComplete,
    status,
    message,
    errorJson
  }
}
property | type | description
---|---|---
jobId | | A unique identifier for the job
progress.percentComplete | | A numeric value from 0 to 100, or undefined in the event of an error
progress.status | | The current status of the job
progress.message | | A message associated with the current status
progress.errorJson | | Error details associated with a failed status
The --progress-url option will preserve query parameters so that you can encode other data you need into the URL, such as a database identifier associated with this data or an API token.
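For example, a progress URL pointing at a hypothetical endpoint of your own service could be passed like this; the host, path, and query parameters below are placeholders:
--progress-url "https://tiling-status.example.com/progress?assetId=12345&token=<api token>"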
Asset server reference
Installing Global 3D Content
The Cesium ion asset server can host Cesium’s curated Global 3D Content such as Cesium World Terrain, OSM Buildings, and Sentinel-2 imagery. Copy the relevant file into the directory being served by the asset server. You can rename the file to whatever you would like and the name will be the tileset identifier. The asset server will auto-detect the database type and host it under the correct route.
For example, if you rename Cesium World Terrain to cwt
, it will be available at /v1/terrain/cwt/
and you can browse the layer.json file by visiting /v1/terrain/cwt/layer.json
. You can also use Cesium World Terrain with the --clamp-to-terrain
option of the 3D tiling pipeline.
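For example, copying a Cesium World Terrain delivery into the directory used earlier in this guide and naming it cwt could look like the following; the source path and file name are placeholders for the file you received:
cp /path/to/cesium-world-terrain-file ~/myTilesets/cwt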
Deploying to production
If you plan on running the asset server in a production environment, make sure your ingress, load balancer, or other content distribution mechanism sets appropriate cache headers for your use case. Because a content caching policy is highly dependent on specific use cases, the asset server does not set any cache headers when in production mode. This means the default behavior is to cache forever, which is probably not what you want.
Log levels
Log levels produced by the API server increase in increments of 10. The following mapping shows the levels and what they represent:
10: TRACE,
20: DEBUG,
30: INFO,
40: WARN,
50: ERROR,
60: FATAL
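To see these levels in practice, you can tail the logs of the API server; the deployment name below assumes the default naming used throughout this guide:
microk8s kubectl logs deployment/cesium-ion-api -n cesium-ion --tail=100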
API Reference
If you are using official Cesium clients, there is typically no need for you to interact directly with the API routes created by the server. However if you are implementing custom processes that rely on 3D tiling pipeline output or implementing your own client, refer to the below documentation for retrieving individual tile data.
3D Tiles
Cesium ion serves 3D Tilesets that conform to the 3D Tiles specification.
GET /v1/3dtiles/{tilesetId}/tileset.json
Retrieves the root tileset JSON for tilesetId
.
Name | Description
---|---
tilesetId | The tileset identifier on which to perform the operation. This is a url-encoded file path relative to the directory being served by the asset server.

Code | Content-Type | Description
---|---|---
200 OK | application/json | The contents of the tileset.json
404 Not Found | application/json | A JSON object describing the error
GET /v1/3dtiles/{tilesetId}/{pathFragment}
Retrieves 3D Tiles content from the tileset.
Name | Description
---|---
tilesetId | The tileset identifier on which to perform the operation. This is a url-encoded file path relative to the directory being served by the asset server.
pathFragment | The path into the 3D tileset. This is a path fragment and not a single identifier.

Code | Content-Type | Description
---|---|---
200 OK | Varies based on content type of the 3D Tiles resource | The contents of the file
404 Not Found | application/json | A JSON object describing the error
Imagery
Cesium ion serves imagery that conforms to the TileMap Service specification (TMS).
GET /v1/imagery/{tilesetId}/tilemapresource.xml
Retrieves the TileMap resource associated with the provided tileset
Name | Description
---|---
tilesetId | The tileset identifier on which to perform the operation. This is a url-encoded file path relative to the directory being served by the asset server.

Code | Content-Type | Description
---|---|---
200 OK | application/xml | The TileMap resource for this tileset
404 Not Found | application/json | A JSON object describing the error
GET /v1/imagery/{tilesetId}/{zoomLevel}/{tileColumn}/{tileRow}.(jpg|png)
Retrieves an imagery tile at the provided coordinates
Name | Description
---|---
tilesetId | The tileset identifier on which to perform the operation. This is a url-encoded file path relative to the directory being served by the asset server.
zoomLevel | The zoom level
tileColumn | The "x" tile coordinate.
tileRow | The "y" tile coordinate.

Code | Content-Type | Description
---|---|---
200 OK | image/jpg, image/png | The image at the provided coordinates
404 Not Found | application/json | A JSON object describing the error
Terrain
Cesium ion serves terrain in the quantized-mesh-1.0 terrain format.
GET /v1/terrain/{tilesetId}/layer.json
Retrieves the layer JSON for tilesetId
.
Name | Description
---|---
tilesetId | The tileset identifier on which to perform the operation. This is a url-encoded file path relative to the directory being served by the asset server.

Code | Content-Type | Description
---|---|---
200 OK | application/json | The contents of the layer.json
404 Not Found | application/json | A JSON object describing the error
GET /v1/terrain/{tilesetId}/{zoomLevel}/{tileColumn}/{tileRow}.terrain
Retrieves a quantized-mesh terrain tile
Name | Description
---|---
tilesetId | The tileset identifier on which to perform the operation. This is a url-encoded file path relative to the directory being served by the asset server.
zoomLevel | The zoom level
tileColumn | The "x" tile coordinate.
tileRow | The "y" tile coordinate.

Code | Content-Type | Description
---|---|---
200 OK | application/vnd.quantized-mesh | The terrain tile and optional extensions.
404 Not Found | application/json | A JSON object describing the error
Health check
GET /health
The health check can be used to ensure the server is running. This route returns a 200 status code whenever the server is reachable (releases prior to 1.1.0 returned 204 No Content; see the changelog).
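For example, you can probe a locally running asset server and inspect the returned status code with curl:
curl -i http://localhost:8070/health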
Third-party software
Cesium ion makes use of third-party software distributed under a variety of open-source licenses, including Apache-2.0, MIT, ISC, BSD-2-Clause, BSD-3-Clause, BSL-1.0, bzip2, CC-BY-4.0, CC0-1.0, LGPL-2.1, libpng, MPL-2.0, OpenSSL, Qhull, the SGI Free Software License B 2.0, the SIL Open Font License 1.1, the Unlicense, and Zlib.
Changelog
1.4.0 2024-10-03
Additions 🎉
-
Exposed the
input-crs
option on the point cloud tiler and the reality tiler which can be used to provide or override a CRS embedded in the data. -
Added ability to duplicate entire Stories. When viewing the Stories list, click Duplicate on a story tile.
-
Added ability to duplicate individual story slides. In a story, right click on the slide thumbnail to access the Duplicate slide menu option.
-
Added ability to link to individual story slides.
-
Reality tiler performance improvements.
Fixes 🔧
-
Minor bug fixes
1.3.0 2024-08-02
Additions 🎉
-
Added the ability to add Google Photorealistic 3D Tiles and Bing imagery assets to ion Self-Hosted by connecting an ion SaaS account to Self-Hosted.
-
Connecting an ion SaaS account to Self-Hosted also adds geocoding to the user interfaces for Clips, Stories, and the Location Editor for My Assets. Geocoding is accessible through the ion Self-Hosted API for geocoding features in solutions you develop.
-
Added bash scripts for running the tiling pipeline via command line as an alternative to running it in Docker.
-
In the tiling command line for 3D_CAPTURE and 3D_MODEL source data, exposed the
input-up-axis
option. When set, it overrides the model’s default up axis and treats the given axis as up. -
Added the ability to create Stories for the moon if Cesium Moon Terrain (CMT) asset is available. Contact Cesium for CMT licensing details.
-
The Reality Tiler was updated with significant performance improvements for large models.
Fixes 🔧
-
Fixed multiple issues with job management. Under certain conditions these issues caused tiling jobs not to run, caused failed jobs to not appear as failed in the ion user interface, and prevented failed jobs from being canceled after the asset was removed.
1.2.2 2024-07-15
-
Security updates
1.2.1 2024-07-09
Fixes 🔧
-
Security updates
-
Minor bug fixes
1.2.0 2024-06-10
Additions 🎉
-
Added Clips support to the frontend application and API.
-
Added the ability to download tiled assets as a zip file. This must be enabled under the features section of values.yaml.
-
Added the ability to import source data from S3 and export tiled asset data to S3. This must be enabled under the features section of values.yaml.
-
The buildings tiler for KML and CITYGML data provides improved performance, best-effort tiling, and improved logging.
Fixes 🔧
-
Fixed an issue with statically hosted assets (GLTF, CZML, KML & GEOJSON) where they were not being deleted by the maintenance script.
-
Fixed uploading 3D Tiles as a pile of files.
-
The size of the cesium-ion-tiling image has been reduced by ~0.5 GB.
-
Fixed the response MIME type for hosted imagery.
-
Fixed a Draco compression bug causing failures in the buildings tiler.
1.1.1 2024-04-23
This is a patch release to fix the inability to tile data in Kubernetes environments without a full cgroup implementation, such as some Windows Subsystem for Linux (WSL) configurations. See Upgrading from a previous release for instructions on the upgrade process.
1.1.0 2024-04-15
This is a maintenance release with many minor bugfixes and improvements for scalability and performance. The most notable items include:
Breaking Changes 📣
-
The
assetServer
health check now returns a status of200
instead of204
to be compatible with ingresses that require it, such as Google Cloud Platform. If you configured an exact check for 204 in your ingress, you must update it to look for 200 instead.
Additions 🎉
-
Added a
strategy
section toassetServer
,apiServer
andfrontendServer
sections to select betweenRecreate
andRollingUpdate
deployment strategies. TheRecreate
deployment strategy is used by default to simplify and reduce required resources for local development. -
Added
activeDeadlineSeconds
totilingJob
section ofvalues.yaml
. Jobs will be killed if they run longer than the number of seconds provided by this value. The default value is no timeout. -
Added documentation for installing Cesium World Bathymetry
-
Point cloud tiling now supports and preserves all point record format field data. See our blog post for more details.
-
Improved performance and reduced memory footprint for tiling certain classes of reality models.
Fixes 🔧
-
Fixed a bug where ion would indicate that a tiling job was done before it was actually complete.
-
Imagery tiling now handles partially corrupted GeoTIFFs and will log a warning instead of failing when they are encountered.
-
Tiling KML buildings will no longer fail when encountering a missing or invalid model and will instead log a warning to the console.
-
Fixed an issue when tiling reality models where floating point conversion could cause the tiler to abort operation.
-
Fixed an issue when tiling reality models where missing texture files could cause the tiler to abort operation.
-
Fixed an issue when tiling reality models that caused occasional texture artifacts.
1.0.0 2023-12-12
-
Initial release