26th Jun, 2021
Gareth Simons

This is a retroactive post. The original work was done around 2014-2016, mostly during the course of 2015.

Early renditions of the CASA ViLo 3D model

After finishing my MRes at CASA, I spent a little more than a year hanging around to work on an exciting project. On the one hand, we had heaps of data from sources such as Ordnance Survey MasterMap and the Environment Agency’s LiDAR data. On the other, we wanted to interactively visualise this information in immersive 3D environments such as Unity3D.

The challenge at the time was that off-the-shelf solutions for piping raw sources of geometrical data straight into highly configurable visualisation front-ends did not yet exist, at least not in a form suitable for our purposes. To be clear, we were aiming for a dynamic, not static, workflow. Static workflows are comparatively easy: you grab a bunch of data from somewhere, pipe it through one GIS platform or another such as QGIS, potentially mix in a step or two via Blender or a bit of customised Python code, drop the geometry into Unity3D, and off you go. You can then style this geometry to your heart’s content and wow your audience. Still, the information remains static, meaning that you have to repeat the whole process every time you want to apply the logic to another location.

What we were working on was dynamic. We had a PostGIS database full of the national-scale data mentioned above, and we wanted to see this appear in real time so that we could instance a model for any given location in England and Wales. We used LiDAR data to derive terrains and building heights; triangulated and cached the geometry (Redis, an in-memory DB); then streamed this information on demand to front-end visualisation clients, where the 3D content was procedurally generated.
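The triangulate-then-cache step amounts to a cache-aside pattern. The sketch below is purely illustrative, not the original implementation: a plain dict stands in for Redis (the real stack would use an async Redis client), and `triangulate` is a stub for the expensive PostGIS query plus triangulation; the layer name and extent values are made up.

```python
import asyncio
import json

# A plain dict standing in for Redis; in the real stack this would be an
# async Redis client shared across worker processes.
CACHE: dict[str, str] = {}

async def triangulate(layer: str, extent: tuple) -> dict:
    """Stub for the expensive step: querying PostGIS for the requested
    extent and triangulating the returned geometry."""
    await asyncio.sleep(0)  # stands in for the DB round-trip + CPU work
    return {"layer": layer, "extent": list(extent), "vertices": [], "faces": []}

async def get_geometry(layer: str, extent: tuple) -> dict:
    """Cache-aside lookup: serve triangulated geometry from the cache,
    computing and storing it on a miss."""
    key = f"{layer}:{extent}"
    cached = CACHE.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: skip PostGIS entirely
    mesh = await triangulate(layer, extent)
    CACHE[key] = json.dumps(mesh)  # cache miss: store for the next request
    return mesh

mesh = asyncio.run(get_geometry("buildings", (530000, 180000, 531000, 181000)))
```

Serialised meshes can then be streamed to whichever client asks for that layer and extent, with repeat requests served straight from memory.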

While Unity3D was deemed a suitable first use case, the server-side API was designed to be sufficiently generic to work with other front-ends, such as the browser-based Three.js, or to be blended with online mapping frameworks. There are three things about this project that still stand out to me:

  • The implementation made extensive use of Python’s then-new asyncio module. This meant an initially steep learning curve, but it combined nicely with frameworks such as sanic or gunicorn to deliver potent I/O throughput.
  • The project eschewed the tile-based approach prevalent among online mapping platforms, instead opting for a layer- and element-based strategy. This paradigm allows for more intuitive workflows, with independently configurable levels of detail and spatial extents, while avoiding the ever-frustrating behaviour associated with tile boundaries, i.e. chopping up building geometries at tile edges.
  • The project was packaged as modular components using Docker, which was also relatively new at the time. This design made it easier to instance or deploy the stack on any number of hosts and for these deployments to scale on demand.
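To give a feel for why asyncio paid off here, the stdlib-only sketch below awaits several independent layer requests concurrently, so total latency approaches that of the slowest single request rather than the sum. The layer names and delays are invented for illustration; in the real system each call would be a PostGIS or Redis round-trip.

```python
import asyncio

async def fetch_layer(name: str, delay: float) -> str:
    # Stands in for an I/O-bound request (PostGIS query or Redis lookup);
    # while one request waits, the event loop services the others.
    await asyncio.sleep(delay)
    return f"{name}: ready"

async def main() -> list[str]:
    # gather() runs the coroutines concurrently and returns their results
    # in argument order, regardless of which finishes first.
    return await asyncio.gather(
        fetch_layer("terrain", 0.03),
        fetch_layer("buildings", 0.02),
        fetch_layer("water", 0.01),
    )

results = asyncio.run(main())
```

The same pattern scales up in an async web framework, where each incoming client request can fan out to the database and cache without blocking other requests.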

Even though the ship has now sailed, the project is still an interesting exercise in the asynchronous streaming of information from GIS backends to interactive visualisation environments. I have some new thoughts on how I’d approach such a system given what I know now, but I’ll keep these to myself for the time being :).

Bonus videos from this time at CASA