Shaded Relief using Skymodels, courtesy of Raster Chunk Processing

A couple of weeks ago I watched some excellent presentations from the How To Do Map Stuff day organized by Daniel Huffman. One that I particularly enjoyed was Jake Adams’s talk on building shaded relief out of hillshades. Toward the end of his talk he brought in something called a skymodel.

In this post, I’ll explain what skymodels are, and how to get Jake’s Raster Chunk Processing software running in a Linux environment so you can use its skymodel feature to make your own versions of this unique kind of shaded relief.

Skymodels were introduced in a paper by Pat Kennelly and James Stewart in 2014. The basic idea behind the skymodel is that, in the real world, we see terrain illuminated by light coming from all parts of the sky in various amounts, not just from the sun. The usual hillshading algorithm, on the other hand, calculates what’s illuminated and what’s in shadow based entirely on light coming from a single point.

An overcast sky is one example of a skymodel: in the overcast sky, light comes mostly from above, but not from any particular direction.  In James Stewart’s SkyLum software, this is represented by the redder, “hotter” dots in the dome of the model, while we have yellow and then cooler blue dots down toward the horizon representing less light coming in from those directions.

skymodel_overcast

Contrast that with this model for what SkyLum calls a Type 6 sky, “partly cloudy, no gradation towards zenith, slight brightening towards the sun.”

Type 6 partly cloudy, no gradation towards zenith, slight brightening towards the sun

Or this, the Type 13, “CIE standard clear sky, polluted atmosphere.”

type 13 CIE standard clear sky, polluted atmosphere

Each of these will produce a different kind of shaded relief.

Baltoro_assortment
The Baltoro glacier, Pakistan. Left: overcast. Middle: type 6, “partly cloudy, no gradation towards zenith, slight brightening towards the sun.” Right: type 13, “CIE standard clear sky, polluted atmosphere.”

In Blender you can position multiple light sources, but SkyLum writes a CSV file (which I will call the illumination file) that defines 250 different light sources and assigns a relative weight to each. This is a new level of complexity in lighting.

180,45,0.000962426
175,37,0.000929834
193,47,0.00118523
187,38,0.000949383
167,47,0.00143157
181,54,0.00168069
169,41,0.00141631
...

Each line in the illumination file above gives the azimuth, elevation, and weight for one illumination source. The weights for all 250 points will add up to 1.
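If you want to sanity-check an illumination file before handing it to RCP, a few lines of Python will do it (a minimal sketch of my own, assuming the header lines have already been deleted):

import csv

# Sum the weight column; the result should be very close to 1.
total = 0.0
with open('illum.csv') as f:
    for row in csv.reader(f):
        if row:  # skip any blank lines
            total += float(row[2])
print(round(total, 6))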

So how does one use this to generate shaded relief?

Well, Jake Adams (thank you, Jake!) has written a clever piece of code called Raster Chunk Processing (RCP, hereafter), which he presents in a three-part blog post. RCP divides up large DEMs into smaller tiles (“chunks”), each of which is processed separately. All of the results are then merged back together for a final result. The point of RCP is that it allows you to work with DEMs that ordinarily would max out your RAM and cause processing to grind to a halt.

This is similar to the way I have divided large DEMs into tiles for processing by Blender, but Jake’s RCP code allows you to use this “divide-and-conquer” strategy to do a whole host of things. One of them is to build a skymodel hillshade, given a DEM and an illumination file from SkyLum.

The RCP code is written in Python, which is platform independent, so although Jake gives us instructions for installing it under Windows, we can get it running under Linux with just a few changes. In this case I did this on a system running Ubuntu 18.04 (Bionic), and since I was using this machine for mapping I already had QGIS and GDAL installed.

To begin, head over to Jake’s Git repository, and download the RCP software using the Clone or Download button. Once unzipped, this will produce a directory called rcp-master. Within it you will see, among other things, the main program (raster_chunk_processing.py), a subdirectory called SkyLum (which contains the SkyLum program), and a couple of other Python files that will be important to us, like settings.py and methods.py.

RCP is written to be run under Python 3, so from an Ubuntu point of view these are the packages you will need to install:

  • python3-numba
  • python3-astropy
  • python3-gdal
  • python3-skimage
  • python3-numpy

(Note that python3-skimage provides the module Jake calls scikit-image.)
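On Ubuntu you can grab them all in one go (package names as listed above):

sudo apt install python3-numba python3-astropy python3-gdal python3-skimage python3-numpy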

Another thing that RCP needs is a working copy of mdenoise, the software that implements Sun’s algorithm for denoising topographic data. Or rather, RCP needs to at least think it has a copy. So you have a choice: if you want to be able to use RCP to denoise DEMs, you should compile yourself a copy of the mdenoise binary; there are instructions here, and it’s not too painful. If not, just use sudo to place an empty file called mdenoise in /bin.
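The empty-file trick is a one-liner:

sudo touch /bin/mdenoise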

Then use a text editor on the file settings.py to alter its single line about where the mdenoise executable is.

MDENOISE_PATH = '/bin/mdenoise'

The last piece you have to put in place is a way to run SkyLum. SkyLum only comes as a Windows binary, SkyLum.exe, so to run this you need to have wine installed. The good news is that SkyLum runs quite well under wine.

Right-click SkyLum.exe, and choose Open With… > Other Application. In the list of applications, choose “winebrowser.”

winebrowser

SkyLum should open right up and show you a piece of terrain with a sky dome over it.

fresh skylum

The complete instructions for SkyLum are in the README file included with it, but I will give a summary here of what I find useful:

  • By default when you open SkyLum the sun (sun position is shown at the bottom) is at 45° elevation and azimuth 180° (due south). 0° is north, and azimuths increase clockwise, as is standard.
  • Hit ? for a help screen.
  • Move the sun with the arrow keys.
  • Right-click and choose an illumination model. See Kennelly and Stewart’s paper for more on these.
  • Hit p to have SkyLum.exe calculate points after you have positioned the sun and chosen a skymodel.
  • Hit o to have SkyLum write a sky model file. It’s conventional to give it a .csv extension.
  • Use a text editor to delete the header lines. All you want left are the comma-separated lines with the azimuth, elevation and weight, as shown above.
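(If you prefer the command line for that last step, something like grep -E '^[0-9]' skymodel_raw.csv > illum.csv will keep only the data lines, assuming the header lines don’t begin with a digit.)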

Now, how to run RCP?

Let’s assume you have a DEM called myDEM.tif, and an illumination file you made with SkyLum called illum.csv. You’ve already deleted the header lines from illum.csv. You want to divide your DEM into 1000x1000 px chunks, with an overlap of 200 px. We’ll also assume you have 4 processors and you want to use 3 of them for this operation.

Drop both myDEM.tif and illum.csv in the rcp-master directory. Open a terminal there.

The general form of the RCP command line is:

python3 raster_chunk_processing.py -m [method] {general options} {method-specific options} input_file output_file

Notice that the first element in the command line is python3, not just python.

For my skymodel in this case the command line will be…

 python3 raster_chunk_processing.py -m skymodel -s 1000 -o 200 -p 3 -l illum.csv --verbose myDEM.tif myDEM_skymodel.tif

What you’ll see in the terminal is a bunch of information about the job to be done, and then you’ll see RCP submitting the sub-jobs (the chunks) to be skymodelled. You can walk away, or watch, fascinated, as the chunks get worked on…

Preparing output file myDEM_skymodel.tif...
Output dimensions: 4046 rows by 4515 columns.
Output data type: <class 'numpy.float32'>
Output size: 69.7 MiB
Output NoData Value: -32767.0

Processing chunks...
Tile 0-0: 1 of 25 (4.000%) started at 0:00:00.701178 Indices: [0:1200, 0:1200] PID: 8777
Tile 0-1: 2 of 25 (8.000%) started at 0:00:00.701253 Indices: [0:1200, 800:2200] PID: 8778
Tile 0-2: 3 of 25 (12.000%) started at 0:00:00.701284 Indices: [0:1200, 1800:3200] PID: 8779

All of the switches and parameters are explained quite well in Jake’s post. My additional notes are:

  • the input file is generally a GeoTIFF, but it can also be a TIF/TFW pair. It has to have a NoData value defined or you will get the error No NoData value set in input DEM.
  • RCP will exit with an error if the output file already exists.
  • By default RCP applies a vertical exaggeration of 5X to the DEM, because this was what Kennelly and Stewart did in their paper. However, you can change this if you would prefer a different vertical exaggeration. Open methods.py in your text editor and go to line 272, as shown in the snippet after this list. (No recompile necessary.)
  • the overlap (-o) parameter is based on how far you think shadows may stretch. The shadowing algorithm checks outward for 600 pixels to see if a given point is shadowed by anything. For this reason, there is no point in making overlap larger than 600.
  • the chunk size parameter (-s) is based on how much RAM each process requires while running. You can experiment with this and watch on the system monitor to see how close you are to spilling over into swap.
  • when shadows are being calculated in the skymodel, each of the 250 illumination points gets its own shadow calculation, which is why a skymodel run takes so much longer than a conventional hillshade.
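Here is the line in question:

# methods.py, line 272: RCP's built-in 5X vertical exaggeration
in_array *= 5

Change the 5 to whatever you prefer, for example:

in_array *= 2  # 2X vertical exaggeration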

The output file can be dragged into QGIS, where I often find I want to increase brightness.

adjusdting_brightness
Left: hillshade image fresh out of RCP. Right: with brightness +100

Another idea that seems to produce some nice results is to combine two or more skymodels with different transparencies and styling. You might also want to check out a gallery of the results of applying all of the different skymodels in SkyLum to the same piece of terrain.

Now that you have RCP running, of course you’ll need to try all of the different skymodels on your favourite DEM to see which you like best. But it’s also worth checking out the other kinds of processing RCP can do on DEMs. It is a very fast conventional hillshader (method hillshade). It runs a nice, quick Gaussian blur (method blur_gauss), much faster than the SAGA Gaussian filter module in QGIS. And I haven’t even tried the CLAHE and TPI processing yet!


The Hollow Plain of Ka’ra

I learned about the existence of  the hollow plain of Ka’ra, in Iraq, when I was reading Gertrude Bell’s letters.

On the 10th of February, 1911, Gertrude, who is forty-two years old, sets out across the Syrian desert from Damascus to go to Hit, some 600 km east, on the Euphrates River. Both of these cities are, at this time, in the Ottoman Empire, so there are no international borders to be crossed.

She begins on a horse.

I rode my mare all day, for I can come and go more easily upon her, but when we get into the heart of the desert I shall ride a camel. It’s less tiring. (Feb 10)

Not alone, Gertrude (who is fluent in Arabic) is part of a party of fifteen, some of whom are her employees. She describes them as…

myself, the Sheikh, Fattuh, ‘Ali and my four camel men, and the other seven merchants who are going across to the Euphrates to buy sheep.

For much of this journey they are outside of the zone of Ottoman control.

In half an hour we passed the little Turkish guard house which is the last outpost of civilization and plunged into the wilderness.

Their exact route is not easy to trace from her letters—she does not give many landmarks—but on February 16th she reports that

We came to the end of the inhospitable Hamad today and the desert is once more diversified by a slight rise and fall of the ground. It is still entirely waterless, so waterless that in the Spring when the grass grows thick the Arabs cannot camp here.

She uses the term Hamad to denote the core of the Syrian desert, the highest, flattest part—although other writers call the entire Syrian Desert the Hamad. On the next day (17th) she writes that they have deviated from their route, which, up to this time, had been almost due east.

So it happened that we had to cut down rather to the south today instead of going to the well of Ka’ra which we could not have reached this evening… the whole day’s march was over ground as flat as a board, flatter even than the Hamad…We had a ten hours march to reach the water by which we are camped.

The 18th…

we got off half an hour before dawn and after about an hour’s riding dropped down off the smooth plain into an endless succession of hills and deep valleys – when I say deep they are about 200 ft deep and they all run north into the hollow plain of Ka’ra.

This is the last we hear about the hollow plain of Ka’ra, which apparently has a series of north-flowing canyons running into it from the south. On the 20th they arrive at the ruins of Muḩaywir in the Wādī Ḩawrān.

We rode today for 6 and a half hours before we got to rain pools in the Wady Hauran, and an hour more to Muhaiwir and a couple of good wells in the valley bed.

The Wādī Ḩawrān and Muḩaywir are not difficult to locate. They show up on this 1959 Times Atlas map of the Middle East, along with the sites she now visits on her way to Hit: Amij, Khubbaz and Kubeisa.

Times_atlas_markup

But Ka’ra (or “Kara” as it is spelled in the printed edition of Bell’s letters, rather than the online archive of her diaries and letters) is not there.

Now, if you’re quicker than me, you probably already picked up that on this map, just to the west of Wādī Ḩawrān, there is a “Jumat Qa’ara”—and reasoned that this might be what Gertrude referred to as the hollow plain of Ka’ra. But I missed that, and began a pointless search of Google Maps, OpenStreetMap, geonames.org and Wikipedia for something called the Ka’ra. (There is a Wikipedia entry for “Kara Depression,” but this is a Kara Depression in northern Russia.)

I did, however, notice on the shaded relief of OpenTopoMap that 40 km west of Muḩaywir there was a 50-km-wide depression with a series of canyons flowing into it from the south. Was this the “hollow plain of Ka’ra”?

opetopomap_annotated

But this feature goes unnamed on online mapping sites.

This shows the weakness of much of online mapping: it is point-based. Area features, which are readily labelled on what we can call “static maps” (maps designed to be printed, or to be a single image you can’t zoom in on) do not make it into the database that underlies slippy maps. Neither do linear features, like rivers. I do not know why OSM, Google et al. try to make everything into points, but points dominate online mapping.

(As an amusing exercise, try typing “Yangtze River” in the search box on Google Maps. You don’t get a very satisfying result.)


However, by luck I found an article by another famous British archaeologist,  Sir Aurel Stein, written some twenty-nine years later. He was writing about his search for Roman forts along the line from Hit to Palmyra. A lot had changed. World War I had happened; the British had created the mandate state of Iraq; they had built the pipeline to carry the oil from Kirkuk to the Mediterranean, and put pumping stations along it; aircraft were in common use. Stein wrote…

after gaining the pipe-line station H2 for a base, we resumed the survey of the ancient trade route which had led from Hit to Palmyra. I was able to recognize its line clearly both from the air and on the ground also over long stretches right up to where Pere Poidebard had before determined its continuation beyond the Syro-‘Iraq frontier. The line proved to have led with characteristic Roman straightness right across the wide sandy depression of Qa’ara, and not as had been supposed before past the ruins of Qasr Helqum.

Ah, “the wide sandy depression of Qa’ara!” And on his map, there it is, just north of the H2 pumping station, and northeast of Mlosi, labelled as “Al Qa’ara”.

Stein map 1940

Ironically, the online mapping sites do not even show the H2 pumping station and airfield, although it used to be a standard feature on static maps, like this National Geographic 1960s map of the Middle East.

NG Middle East 1970 detail

Mlosi, which Stein also called the “well of Mlosi,” is shown on the National Geographic map as Bi’r al Mulusi, and is probably the same as the Ābār al Malūsī (ابار الملوسي) located by geonames.org at N 33°29′48″ E 40°06′14″. (Ābār being the plural of Bi’r, a well.) Qasr Helqum would be Qaşr al Ḩalqūm, which geonames.org places visibly on the north rim of the depression.

Google Hybrid with annotation
Google Hybrid of the hollow plain of Ka’ra/Al-Qa’ara, with labels added manually

Bell doesn’t indicate whether she had heard of the Ka’ra before, but Stein refers to “the wide sandy depression of Qa’ara” as if it is well-known. How did he learn about it? What maps was he using? It would also be nice to know how it is spelled in Arabic, so we could know how it should be transliterated to the Latin alphabet.


GE view of Qa'rah
Al Qa’ara as seen on Google Earth, looking southeast, with the “deep valleys” running “north into the hollow plain.”

Well, it turns out, if I look at more static maps, the hollow plain is consistently labelled.

The 1986 Soviet 1:200,000-scale topographic map I-37-23 has “ВПАДИНА КААРА.” Впадина is a depression.

Kaara on Soviet 200K I-37-23

A 1944 map by British Naval Intelligence, from the Perry-Castañeda Library, shows it as “JUMAT QAARA.” (And this is what I see, looking back at the Times Atlas, above.)

1944 Naval Intelligence western Iraq detail

And the 1942 map from the US Army Map Service (NJ-43-11, “Rutba”) calls it JUMAT AL QAARA.

1942 AMS quarter inch I-37 Q Rutba detail

There are also a number of recent geological papers about this feature. Mustafa and Tobia have one called Modes of Gold Occurrences in Ga’ara Depression, Western Iraq, in the Iraqi Bulletin of Mining, 2010. Their map leaves no doubt that the Ga’ara Depression is the same as Bell’s Ka’ra and Stein’s Al Qa’ara.

Mustafa and Tobias map

The paper’s abstract is in Arabic, so we can see how they render “Ga’ara Depression” in Arabic.

Tobia article titles

In the Arabic title, “In Ga’ara Depression” is written with the unusual letter gaf-with-line (گ), a variant on kāf (ك) which is regularly used in Persian, a language that has a /g/ sound. منخفض Munkhafidun is a depression.

It’s a confusing variety of transliterations: Ka’ra, Qa’ara, Ga’ara. What helps make sense of it is that (as I learn at https://en.wikipedia.org/wiki/Varieties_of_Arabic) the letter ق, pronounced /q/ in classical Arabic, has become a /g/ in both the Iraqi and Nejdi dialects. Typically they still spell these words with a ق (as in Qa’ara), but sometimes they use the gaf-with-line (as in Ga’ara).

Poking through an Arabic-English dictionary I see that the Q-‘-R triliteral root in Arabic means to be deep, or hollowed out. So, ironically, Qa’ara may simply mean the deep, hollowed out place. This may be why Gertrude Bell called it “the hollow plain of Ka’ra.”

So, now I know where this hollow, sandy plain is, but I am left with one mystery that I can’t solve. On the maps where it is labelled Jumat Al Qaara, or Jumat Qa’ara, what does Jumat mean? Jum’ah (جمعة) is the word for Friday, but it seems unlikely that this is the Friday of Hollows. The J-M-‘ root means to gather or collect, so conceivably this is the Collection of Hollows? I could not find other Jumats to compare this to. Any ideas from Arabic speakers would be welcome.


How do we understand the size of Syria?

Syrian_comparison_inset_2004_CIA

This inset appears on a CIA map of Syria from 2004. We can assume it’s meant to give the map reader a sense of the size of Syria by comparing it to a region that he or she is familiar with.

I’m fascinated, first, by the assumption that the decision-maker reading the map is located in the mid-Atlantic states (Washington, presumably). This kind of regionality runs throughout American politics, a geography of where important people live, and where they don’t, that one carries in the memory and consults without even knowing it.

This is an attempt to make the map personal, but I almost think it should be captioned, “This might be helpful to you if you happen to live in the northeast.”

The second thing I’m fascinated by is, assuming we have to stick to the northeast USA, would it have been a smarter decision  to have aligned Damascus with Washington, DC? These are the two national capitals. Such a juxtaposition would, in the context of the Syrian Civil War, allow the President to imagine New York no longer doing his bidding, and Providence functioning like an independent state.

CIA inset Washington and Damascus aligned

Perhaps this juxtaposition puts too much of Syria out to sea, but it does put New York City more or less where Raqqa, the ISIS capital, was.

Of course one must always be careful when constructing these comparison maps. In GIS software a polygon (Syria) whose coordinates are in degrees, dragged to another latitude, will be the wrong size. The safer method, which I have used here, is to make separate maps at the same scale of both Syria and the northeast US, and then juxtapose them in a photo editing program (the GIMP, in my case).  Syria is about 780 km from end to end on its long axis, about the distance from Boston to Richmond, Virginia.

This brings up the question of what it means to understand distance. In the original map, what does the distance from, say, Philadelphia to the eastern tip of Tennessee, mean to people living in Washington? I suspect they rarely go very far to the southwest from the city, and when they do they experience slow, twisting roads that have difficulty passing through the Appalachian mountains. So if the Washington-dweller says “It would take me six hours to drive to the tip of Tennessee!” is there anything meaningful at all there for his understanding of Syria?

I kind of prefer this juxtaposition, both for the similarity of desert landscape, and the sense of distance.

CIA inset LA and Damascus aligned

Los Angeles stands in for Damascus here, and Las Vegas finds itself somewhere up on the Euphrates. (“Las Vegas on the Euphrates” is not yet, but may someday be, the tourism slogan of Deir Ez-Zur or Raqqa.) If you are familiar with the distances and deserts of the American southwest, it is remarkable how much smaller Syria looks when shown like this.

Of course, as Appalachian Trail hikers know, there happens to be a town in Virginia called Damascus. It’s actually right there, just across the border from that eastern tip of Tennessee. So perhaps the best juxtaposition of all aligns Damascus, Virginia, with Damascus, Syria.

CIA inset Damascus and Damascus aligned


Digital Atlas of the Roman Empire

Fans of the Digital Atlas of the Roman Empire basemap (or DARE basemap) may have noticed that it disappeared from Peripleo a few months ago!

Peripleo is the superb mapping site where you can look up the locations of places in the ancient world. It is absolutely indispensable in those moments when you just can’t remember which modern town occupies the site of ancient Mesembria. Which happens to me a lot. Also Sirmium.

At present when you look at the available layers in Peripleo, the satellite layer (“Aerial”), OpenStreetMap (“Modern Places”) and the empty basemap are there, but no DARE (“Ancient Places”).

peripleo map layers dialogue

Even more worrisome, the old URL for the DARE map, hosted at the University of Lund, Sweden, http://dare.ht.lu.se/, was not responding.

However, DARE is now back! The map has been moved to a new location at the Centre for Digital Humanities, University of Gothenburg, https://dh.gu.se/dare/.

new DARE

Many thanks to its creator and maintainer, Johan Åhlfeldt.

Better yet, he has also given out the URL for the tile server itself, so if you run QGIS you can now add DARE as a basemap layer via the QuickMapServices plugin.

Here’s how:

Go Web>QuickMapServices>Settings

settings

Go to the Add/Edit/Remove tab.

adEditRemove

Click the ‘+’ button next to My groups, and create a DARE group.

addGroup

Click OK and then click the ‘+’ button next to My Services. First add the name of the service and choose TMS for Type

addService1

then go to the TMS tab and fill in those parameters:

addService2

Click OK and then Save, and your service is set up.

It should now appear on the Web>QuickMapServices menu.

dare example

Also, since tile-based services operate at zoom levels that correspond to a very strange  set of scales (like 1:1,155,583), remember that you can always snap to the nearest tile scale by going Web>QuickMapServices>Set proper scale.

Happy ancient world mapping!


Shaded relief with BlenderGIS (2019), part 3

[Back to Part 2]

Why do this?

The shaded relief images (also often called hillshades) that GIS software produces are pretty good. Here’s one made with QGIS.

QGIS 2-up
The hillshade by itself (left), and blended with hypsometric tinting (right)

Hillshading algorithms typically take three parameters: the elevation of the sun, the azimuth (direction) of the sun, and the vertical exaggeration of the landscape. (60°, 337° and 1X, in this case.) They do some geometrical calculations to figure out the intensity of incident light on all the pieces of the landscape, and then produce a greyscale image. (Typically, however, they do not calculate actual thrown shadows.)

You can tweak the brightness and contrast in your GIS software, or stretch the histogram to your liking. This can do a lot to lighten up the darkest shadows, or to turn the hillshade into a ghostly wash that merely suggests relief without dominating everything else on the map.

Much of the time what we can get out of GIS software meets our needs for a hillshade, which, after all, is not the final map but merely a layer of the map. (Although occasionally it is the pièce de résistance…)

But perhaps you don’t like the glossy, shiny quality of that hillshade above. You think the mountains look like they were extruded in plastic. They remind you a little too much of Google Maps’ Terrain layer. Maybe you’d like to see actual shadows thrown by precipitous cliffs. Maybe you’d like something that looks more like it was chiselled from stone, like this…

Blender 2-up

Or perhaps you are interested in…

Blender 2-up 2 suns
A second sun, shining straight down to eliminate the darkest shadows
Blender 2-up warm and cool light
A warm (yellow) sun in the northwest, cool (blue) shadows
Blender 2-up denoised
Denoising performed on the image after rendering

This might be why you are investigating Blender.

The big settings that make a difference (besides the conventional settings of azimuth, elevation and vertical exaggeration) are…

  • material
  • multiple lights and their colours
  • amount of light bounce
  • denoising the render

Material

In animation modelling, Material is what reflects, absorbs and scatters light. Blender spends much of its time tracing light rays and deciding how the surfaces they encounter affect them.

With your plane selected, go to the Material tab and hit New. The default material that comes up has these Surface properties. (And this is what Blender used for your basic render.)

Principled BSDF initial properties

Principled BSDF is a sort of super-versatile material that allows you to have all of these properties (subsurface scattering, metallic look, sheen, transmission of light) that in earlier versions of Blender were assigned to specific surface types, like “Diffuse BSDF,” “Glossy BSDF” or “Subsurface scattering.”

(BSDF stands for “bidirectional scattering distribution function.”)

These various surfaces are actually shaders, which are pieces of software that render the appearance of things (and may actually run on your GPU, not your CPU). If you click on “Principled BSDF” next to the word “Surface” a list of all of the possible shaders comes up.

shader list

You can learn a lot more about shaders in the Blender manual, but Blender essentially wants to give you the tools to be able to simulate any material, from water to hair, and some of the effects you can get applying this to shaded relief are pretty weird.

8 kinds of material
The same piece of terrain rendered with different shaders. Top, left to right: Diffuse BSDF with a wave texture, Glass BSDF, Hair BSDF, Principled BSDF. Bottom, left to right: Toon BSDF, Translucent BSDF, Translucent BSDF with Principled volume emission, Velvet BSDF.

The main shader you probably want to play with is the Mix Shader, which allows you to mix the effects of two different shaders. The Mix Shader’s  factor (from 0 to 1) determines how much the results are influenced by the second shader.

Mix Diffuse Glossy
On left, the original render; on right, a Mix shader (Fac=0.2) of Diffuse BSDF (defaults) and Glossy BSDF (IOR=2). This adds just a bit of glossiness to the surface.
Mix Diffuse Toon
On left, the original render; on right, a Mix shader (Fac=0.3) of Diffuse BSDF (defaults) and Toon BSDF (defaults). This brings in bright highlights that tend to wash out flat surfaces.

The other aspect of material that is worth experimenting with is value. Blender’s default material, the Principled BSDF, has a default colour of near-white, and its value is 0.906 (on a scale of 0 to 1).

principled BSDF default base colour

By darkening this you create a more light-absorbing material.

Principled BSDF base colour 0.5 lamp 45 337 str 3 v2
On left, the original render; on right, Principled BSDF (base color value turned down to 0.5).

You might think that the effect of a darker material is just to shift the distribution of pixel values down in the render, and that you could get the same effect by turning down the brightness of the original. But if you look at the histogram of each image you see the pixel values are distributed differently, and even when both are displayed with “histogram stretch,” they are distinct from each other.

base color 0.906 versus 0.5 both histogram stretched
Left: the value of the base color of material (Principled BSDF) is 0.906 (the default). Right: the value of the base color is 0.5. Both are displayed with histogram stretch.

And of course if it works for your map you can give the material real colour. (Don’t forget to save the render as RGB rather than BW). This, however, is very similar to colorizing your shaded relief in GIS software.

sandstone yellow material

Multiple lights and their colours

To add another light to your scene, go Add>Light>Sun. You can also experiment with adding other types of lights: point lights (which shine in all directions from a specific place), spot lights (which send out a specific cone of light) and area lights.

A second light can bring out features in a very nice way.

second sun from the east
On left: vertical exaggeration 2x, one sun in the NNW, elevation 45°, angle= 10°, strength = 3. On right: the same scene plus a second sun in the east, elevation 45°, angle = 1°, strength 0.5.

By giving colour to lights, you can create differential lighting and colour.

Diffuse 2x BSDF default 45 337 str 3 color orange LAMP2 str 0.5 az east color yellow
Same scene as above, but now the sun in the NNW is orange, and the sun in the east is yellow.

There is one more light in your scene that often goes unnoticed, and this is what Blender calls the world background colour. You can think of this as a very far away surface that nonetheless does contribute a bit of light to your scene—from all directions. You can see the world colour in action if you render your scene with the sun strength turned down to 0.

world background only
No lights in the scene: mysteriously there’s still something there. This is the effect of the world background colour.

If there are places in your scene that sunlight does not reach, the world background still contributes to their illumination—and colour.

By default the world background colour is 25% grey (value = 0.25), but you can change this to a dark blue if you would like dark blue light to collect in your shadows.

world background blue
World background colour set to blue (Hex 414465), strength 1. Secondary pale yellow (Hex FFE489) light in east, strength 0.5

Amount of light bounce

Part of the charm of Blender is that it calculates how much light reflects off surfaces and then hits other surfaces, which is why even shadowed areas have some light. You can control, however, the number of bounces a light ray has before it expires. This is on the Render tab, in the Light Paths section.

light paths default

By default, the light paths settings are as above. With the material I am using, the Max Bounces for Diffuse materials is the important part, and by default it is 4.

The icon to the right of “Light Paths” indicates that there are presets available. Clicking here reveals three key presets:

  • Direct Light: light gets few bounces, and none off diffuse material
  • Limited Global Illumination: light gets one bounce off diffuse material
  • Full Global Illumination: light gets 128 bounces off diffuse material

Changing among these does not make a huge difference, but in scenes with deep shadows it is visible.

light paths compared
Clockwise from top left, the presets are: Direct Light, Limited Global Illumination, Full Global Illumination, and the default settings. (With vertical exaggeration at 2x)
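Incidentally, if you prefer to script these settings rather than click through the UI, they live on the scene’s Cycles properties. A sketch from Blender’s Python console (property names as of the 2.8x Cycles API):

import bpy

cycles = bpy.context.scene.cycles
cycles.max_bounces = 12      # overall cap on bounces per ray
cycles.diffuse_bounces = 4   # the default; set to 1 to approximate Limited Global Illumination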

Denoising the render

Blender offers the post-processing feature of denoising the render. I liken this to a blanket of snow on your landscape: it erases tiny details.

denoise comparison
Left: 2x vertical exaggeration, sun in NNW, elevation 45° strength 3. Right: the same, plus denoising (Strength = 1, Feature Strength = 1)

To activate denoising, go to the View Layer tab, scroll to the very last section, Denoising, and check the Denoising box.

denoising panel

I do not find that denoising does much if you keep the default settings. I tend to turn both Strength and Feature Strength up to 1.0.
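For the scripting-inclined, the same checkbox and sliders can be set from the Python console (again, property names as of Blender 2.8x, where denoising belongs to the view layer):

import bpy

vl_cycles = bpy.context.view_layer.cycles
vl_cycles.use_denoising = True
vl_cycles.denoising_strength = 1.0          # I turn both of these up to 1.0
vl_cycles.denoising_feature_strength = 1.0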

In conclusion

This has only been the tip of the iceberg. Some things I have not talked about are

  • Applying a subdivision surface modifier to your mesh so that Blender is interpolating and smoothing it on the fly.
  • BlenderGIS’s ability to read your DEM As DEM Texture, which creates a plane with subdivision and displace modifiers instead of a plane whose every vertex corresponds to a DEM cell.
  • How to focus the orthographic camera down on one small part of your DEM where you can do quick test renders while you work out lighting, material, etc.
  • The node editor for complex materials
  • Using a perspective camera to shoot a scene that is not straight down
  • Cutting DEMs into tiles for Blender to work with, when the whole DEM is simply too much for the RAM on your computer.

Have fun exploring Blender’s many features!

Shaded Relief with BlenderGIS (2019), part 2

[Back to Part 1]

In the overall process we are now at step 3, but things will go faster now.

  1. Prepare your DEM
  2. Read the DEM into Blender as DEM raw data build
  3. Adjust Z scaling (vertical exaggeration)
  4. Create and adjust a georef camera
  5. Correct the final pixel dimensions of the output image to match the DEM
  6. Set final image type to be TIFF
  7. Turn the light into a Sun, and adjust its properties
  8. Do a test render
  9. Do a full render

At the end I’ll cover some of the variations on this process and extra tweaks you can do.

Adjust Z scaling (vertical exaggeration)

The first thing to adjust on your plane is what we usually call the vertical exaggeration, but Blender thinks of as the Z scaling.

With the plane selected, go to the Object tab icon Object tab. Here you should see a number of sections, the first of which is the Transform section.

Object Transform section

Under Scale increase the Z value if you want to create vertical exaggeration. You should see your plane change shape.

It’s important to increase Z scaling before setting up the georef camera, because if you increase Z scaling afterwards, you may accidentally have set the camera below the tops of the highest features.

In my case the terrain has plenty of relief, so I’ll leave Scale Z at 1.000.

Create and adjust georef camera

Make sure the plane is still selected (in the Outliner) and go GIS>Camera>Georef.

A new camera (“Georef cam”) will appear in the Outliner. You do not need to delete the old camera.

When the georef cam is selected, then in the 3D Viewport you’ll see something like this (you may have to pull back).

georef cam created

This is an orthographic camera (meaning, in effect, that all points of its lens look directly down: there is no perspective distortion) placed over the plane.

To check that your camera is properly pointed at your landscape, and sees all of it, hover the mouse over the 3D Viewport, and hit 0 on the numeric keypad (or go View>Viewpoint>Camera). You should see what the camera sees (“camera orthographic view”)

camera orthographic view

Hit NumPad-0 again (or go View>Viewpoint>Camera again) to go back to regular (“User perspective”) view.

Output tab: Correct the final pixel dimensions of the output image to match the DEM, and set final image type to be TIFF

Go to the Output tab, and under Dimensions check the Resolution X and Y that were set up when you created the georeferenced camera. Also note the percentage (%), which is 100% at this point.

output resolution

Change Resolution X and Resolution Y to match the pixel dimensions of your DEM. In my case, the initial DEM was 839 x 702 pixels, so I enter these two numbers for X and Y. With the dimensions of the hillshade matching those of the DEM I can, at the end, apply the DEM’s world file to the Blender output, and georeference it.

It’s possible at this stage in the process to set percentage to something small, like 20%. This way you can do some quick test renders: Blender will produce a rendering whose pixel dimensions are only 20% of the overall Resolution numbers you set. (Before the final render you’ll be back here and set this to 100% again.)

Lower down on the same tab, under Output, set the parameters for the type of image you want.

Output output parameters

I typically set these to File Format = TIFF, Color=BW and Color Depth = 8, but you can get JPG, PNG, etc. If you choose later to assign colour (other than grey or white) to the plane’s material, the world background or the sun, you will probably want to set Color to RGB or RGBA.

Turn the light into a Sun, and adjust its properties

Your plane is built, your camera is set: now all that is left to arrange is the Sunlight.

Select the Light in the outliner, and go to the Object Data tab (the green light bulb icon).

Initially you should see something like this under Light:

Light object data Light initial

Click on Sun, and then change Strength to 1.

Click on the Use Nodes button below, and set its Strength to 2. It should now look like this.

Light object data Light final

You will want to play around with the Strength setting (in the Nodes section) when you do test renders. The strength of the light affects how bright your hillshade is.

The Angle setting for the sun is important. The Angle is how many degrees across the sun’s face appears from the ground. A sun with a smaller angle produces shadows with sharper edges.

sun angle comparison
Sun angle of 1° (left) and 10° (right)

A 1° sun is essentially a point source, and it casts sharp shadows. A 10° sun is bigger and the edges of shadows are diffuse. Take your pick.

Note that if you decide to use a coloured sun later (see “Part 3“) this is where you will change its colour: Light>Object Data>Nodes>Color.

Now go to the Object tab, and consider the location, rotation and scale of the light.

Light object transform initial

The location of the light does not matter: a “Sun” type light is considered to shine from an infinite distance regardless of its location. Scale should be left as all 1’s.

The rotation of the light however is all-important to us. It is here that we set the sun elevation and azimuth. These are both counter-intuitive in Blender, so here’s how it works.

Rotation X is always 0

Rotation Y is the complement of the familiar elevation angle (that is, 90 minus elevation). E.g., if you want a sun 60° above the horizon, set Rotation Y to 30°. Think of a light in a theatre clamped to a lighting pipe suspended over the stage. The pipe is the Y axis. The light is initially pointing straight down (0°) and you are swinging it up by 30°.

Rotation Z is the azimuth of the light, but instead of beginning at 0° for North and increasing clockwise (as we are familiar with from compass bearings), it begins at 0° for East and increases counterclockwise. (Think of angles measured on a coordinate plane.) It’s generally quickest to figure out by sketching on the back of an envelope, but the actual formula is that the Rotation Z = 450 – Azimuth, and subtract 360 from your answer if it is greater than 360.
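If you would rather compute than sketch, here is a small helper function of my own (not part of Blender or BlenderGIS) that implements the two rules above:

def sun_rotation(elevation_deg, azimuth_deg):
    # Compass sun position -> Blender light rotation, all in degrees.
    rot_x = 0
    rot_y = 90 - elevation_deg           # complement of the elevation
    rot_z = (450 - azimuth_deg) % 360    # compass azimuth -> Blender Z rotation
    return rot_x, rot_y, rot_z

print(sun_rotation(50, 337))  # (0, 40, 113), matching the example below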

common rotations Z

A few common angles: If I want a sun 50° above the horizon, coming from azimuth 337° (North-northwest), my light’s Rotation numbers will look like this:

light rotation for 50 at 337

Do a test render (F12)

There are two good ways to do this.

The first is to switch into camera view (Numpad-0) and change the viewport shading from Solid to Render. In the 3D Viewport, locate this group of icons in the upper right:

and note the group of four circles on the right end. At present, the second one, Solid viewport shading, is selected. Click on the fourth one to change to Render viewport shading. You will see Blender begin to render your scene with the camera and lights you have configured.

The other way to do a test render is to set percentage on the Output tab to something small (e.g., 10% or 20%) and hit F12 to get a test render at a fraction of the size of the final render. A little Render window opens and after a pause you (hopefully!) get something like this.

test render basic

Either way, doing a test render often catches common mistakes, like forgetting to set up your light or camera. It also reveals how the overall image will look, and so you can adjust your light strength, azimuth and elevation to increase or decrease shadows and the overall brightness of the image.

When finished, be sure to either switch back to Solid viewport shading or set the output percentage to 100%.

Render

On the Render tab, adjust Sampling. To the right of the word “Sampling,” note this menu icon:

This icon indicates that presets exist for this setting. Click on it and select Final. This increases the sampling numbers for the render, which will improve image quality.

Hit F12 to render.

Note that you can interrupt a render by hitting ESC, or by closing the render window.

I get this result:

basic render

Image>Save As allows you to save the image. If I saved this as “Blender shaded relief 1x 50 337.tif” I would then make a copy of the TFW world file I created back at the beginning and name it “Blender shaded relief 1x 50 337.tfw”. This makes something georeferenced that I can pull into my GIS.

That’s it! You made a hillshade in Blender!

Now, to consider the many other things we can fiddle with in Blender—material, number of lights, denoising, etc.—go to Part 3.

Shaded relief with BlenderGIS (2020), part 1

This tutorial replaces my “Shaded Relief with BlenderGIS” tutorial from five years ago. At this point (March 2020) I am using Blender 2.82 and the most recent BlenderGIS addon. Daniel Huffman’s tutorial, which uses a different technique, has also been updated.

Blender 2-up denoised

It’s rather an understatement to say that Blender is complex. To keep things from getting out of hand, I’m going to take a straight path, right through the centre of the software, with a focus on getting out the other side with a completed render of shaded relief. But at the end, I will return and explore some of the interesting side trails that lead to the features that make the Blender hillshades so interesting.

Here’s the basic procedure I will follow to create a hillshade, once Blender is installed with the BlenderGIS addon. This can function as a checklist once you’ve become familiar with the process:

  1. Prepare your DEM
  2. Read the DEM into Blender as DEM raw data build
  3. Adjust Z scaling (vertical exaggeration)
  4. Create and adjust a georef camera
  5. Correct the final pixel dimensions of the output image to match the DEM
  6. Set final image type to be TIFF
  7. Turn the light into a Sun, and adjust its properties
  8. Do a test render
  9. Do a full render

Before you experiment with using Blender to make shaded relief, you will want to try the relatively simpler step of producing your own shaded relief in GIS software like QGIS. Consequently, this tutorial won’t explain a host of things that you would learn in that process: what digital elevation models (DEMs) are, the importance of cell sizes, nodata values and projections, or the art of combining shaded relief with other layers, stretching histograms and adjusting brightness and contrast. I will assume that you already know that, at its most basic level, creating shaded relief involves specifying a sun elevation and azimuth, and selecting a vertical exaggeration for your terrain.

I use the free GDAL command-line tools gdal_translate and gdalwarp to do re-projection and re-sampling, as well as produce world files. If the command line makes you queasy, QGIS offers a graphical front-end to these tools as well. (Processing>Toolbox, and search on “translate” or “warp.”)

Your first step will be to go to https://www.blender.org/ and obtain the free animation software Blender 2.8.

Installation of BlenderGIS

You only have to do this once, and then Blender is prepared to accept geographic data.

The BlenderGIS addon has installation instructions in its wiki at https://github.com/domlysz/BlenderGIS/wiki/Install-and-usage. Basically, they go as follows.

Go to the BlenderGIS site on github and hit the Clone or Download button, and then Download ZIP. You will receive a file called BlenderGIS-master.zip which you can store pretty much anywhere. Once you’ve installed BlenderGIS, you won’t need this file any more.

Within Blender,  go Edit>Preferences, and select the Addons tab. Click Install…, select the BlenderGIS-master.zip file, and click Install Add-on from File.

Once it is installed, be sure to check the box next to 3D view: BlenderGIS to enable it.

BlenderGIS installed

Note that a new menu appears in Blender: the GIS menu.

BlenderGIS menu

Two more tweaks will make Blender easier to use.

  1. Note the default cube that is always there in a new workspace. To delete it, hover the mouse over the cube and hit the Delete key.
  2. Find the Render tab on the right, and set Render Engine to Cycles. (If you have a good graphics card, you might also want to set Device to GPU Compute.)
setCycles

Now go File>Defaults>Save Startup File. This means that in the future, Blender will open with no cube, and with Cycles as the render engine.

Preparing your DEM

  1. Note down the dimensions of your DEM
  2. Convert (re-project) to a metric projection
  3. Create a world file

First I will note down the pixel dimensions of my DEM, which in this case are 839 x 702. I will use these numbers later to tell Blender what size image to produce.

Now, the most important principle of making hillshades is that your vertical and horizontal units should be the same. If the DEM data came projected in degrees, you need to convert that to a projection measured in metres.

You’ll notice later that BlenderGIS claims that it can read a DEM projected in “WGS84 latlon”—in other words, EPSG 4326. I have found this works only when you are reading that DEM into a Blender scene already georeferenced in a metric projection. You have to do a kind of complicated head-stand to make this occur, so we’ll take the simpler route and feed BlenderGIS a DEM that is projected in metres. Typically for me (in northern British Columbia) this would be UTM Zone 9 North/WGS84 (EPSG 32609) or the BC Albers Equal-Area projection/WGS84 (EPSG 3005).

I will use gdalwarp to re-project, and (optionally) re-sample. In the following example, I am re-projecting out of  lat/long/WGS84 (EPSG code 4326) into UTM Zone 9 north/WGS84 (EPSG code 32609), and re-sampling to 16 metre square cells. I will assume your DEM is a geotiff file.

gdalwarp  -s_srs EPSG:4326 -t_srs EPSG:32609 -r bilinear -tr 16 16 -of GTiff -co "TFW=YES" myDEM.tif myDEM_32609_16x16.tif

Where

  • -s_srs: the Spatial Reference System (projection & datum) of the original (the “source”)
  • -t_srs: the SRS of the result (the “target”)
  • -r: resampling method (bilinear, cubic, etc.)
  • -tr: resolution, in metres (give two values, one for X and one for Y)
  • -of: output format
  • -co: additional creation options (TFW=YES means create a world file)

The terms “projection/datum,” SRS (Spatial Reference System), and CRS (Coordinate Reference System) are interchangeable for our purposes.

[I’m not going to suggest converting your DEM to Float32 data type. This makes a difference only with large-scale mapping (e.g., 1:25,000 or larger), where you might actually see “steps” in your hillshade where the elevations change by whole metres. If you are doing such mapping, consider floating your DEM first.]

Finally, you will want a world file for the DEM, so you can georeference the hillshade Blender produces. World files used to be pretty common, but raster data now so often comes with its georeferencing built in that it’s worth explaining what they are.

The world file was a clever invention: a six-line text file that contained the cell dimensions and the coordinates of the image origin (upper left corner). It was paired with the image file by having the exact same filename, but with a TFW extension (for TIFF images). For example, you could have myDEM.tif and its associated world file myDEM.tfw. This allowed the georeferencing information to be held externally to the image file.

world file
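For example, the world file for the 16-metre DEM we made above would look something like this, reading: x cell size, two rotation terms, negative y cell size (negative because rows run downward), then the x and y coordinates of the origin (the coordinates here are invented):

16.0
0.0
0.0
-16.0
595008.0
6427760.0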

When an image file has no georeferencing (i.e., it is an ordinary TIFF image, not a GeoTiff), the world file is all your GIS software needs to correctly place and scale the image. The only additional piece that it will have to ask you for is the projection of the image, in order to make sense of the world file.

For JPG and PNG images, the world file extensions are JGW and PNW, respectively.

To make a world file I use gdal_translate as follows:

gdal_translate -co "TFW=YES" myDEM.tif deleteme.tif

This copies myDEM.tif to a dummy file called deleteme.tif (which I will immediately delete), and in the process makes a world file called deleteme.tfw. For now I’ll rename this to myDEM.tfw. Later I’ll change its name again to match the image created by Blender.
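The whole world-file dance, then, is three commands:

gdal_translate -co "TFW=YES" myDEM.tif deleteme.tif
rm deleteme.tif
mv deleteme.tfw myDEM.tfw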

So at the end of this preparation step I have three things:

  • a DEM in a metric projection (e.g., myDEM_32609_16x16.tif)
  • a corresponding world file (e.g., myDEM_32609_16x16.tfw)
  • the EPSG code for the projection the DEM is in (e.g., 32609)

Read the DEM into Blender as Raw DEM

On the GIS menu, go Import>Georeferenced raster, and navigate to your DEM file.

In the right margin, set Mode to DEM raw data build (slow). (If you do not check the Build faces box, you will get a point cloud.)

importGeoraster options

You also have the option here of selecting the correct CRS (coordinate reference system) for your DEM. You need to do this only if you plan on bringing other georeferenced data into Blender to lay atop your DEM. If not, you can leave this at the default value (WGS84 latlon) even though that is incorrect.

[If the CRS you want to use is not on the dropdown menu, you can add it by clicking the “+” button, checking the Search box, typing in the EPSG code for your CRS into the Query box, and hitting Enter. It should appear in the Results box, and then you just check Save to addon preferences and click OK.]

addingANewCRS

After a pause, during which Blender builds a plane mesh out of your DEM, you will see a grey, 3D rendering of your terrain, floating in a coordinate space.

initial screen

A quick tour of Blender

Let’s take a brief detour here and learn some of the language of Blender, as well as some of its controls and unexpected behaviours.

Blender, being 3D animation software, is a world of vertices (points in space), edges (which connect vertices) and faces (which fill in between edges). In this case, the cells of the DEM have been translated into a square array of vertices, each with the appropriate height for the DEM cell it represents. These vertices have been connected with edges that form a square lattice like the cells of the DEM. Each square of four edges has a face.

The whole set of vertices, etc. is a mesh, specifically in this case a plane mesh.

In the lower right  corner, you will notice a status bar:

initial stats

This tells us that we have 588,978 vertices and 587,438 faces. My DEM is 839 columns by 702 rows, and 839 x 702 = 588,978, so I could have predicted the number of vertices. This is useful to know so you can estimate, before reading in a DEM, how much RAM it will require. On average (and this varies as the size of the DEM increases), every 3,500,000 vertices requires 1GB of RAM.

The number of faces is slightly less because no faces are generated by the final row and final column: 839 + 702 = 1541, minus 1 for the face counted by both the final row and final column, hence 1540 fewer faces than vertices.

The status bar also tells us we are presently using 291MB of RAM. This will go up during the final render. Near as I can tell, available RAM is the only limit to the size of DEMs that Blender can handle.
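As a quick back-of-the-envelope in Python, using that 3,500,000-vertices-per-GB figure (a rough rule of thumb, not an exact measure):

def blender_ram_gb(cols, rows):
    # One vertex per DEM cell; roughly 3.5 million vertices per GB.
    return cols * rows / 3_500_000

print(round(blender_ram_gb(839, 702), 3))  # about 0.168 GB for this tutorial's DEM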

You can see the vertices, edges and faces of your plane mesh if you zoom in and press TAB to toggle into Edit mode.

initial mesh in edit mode

Don’t worry if you can’t figure out how to do this just yet. If you do though, be sure to press TAB again to leave Edit mode before continuing on.

The upper right corner of Blender has a panel called an outliner that shows a tree of objects in the Blender world. At present it looks like this.

outliner

Notice that the plane (called “Thomlinson 32609 16m clip” in my case) is selected. (In the 3D view this is reflected by the fact that the plane is outlined in orange.) In addition there is a camera and a light.

Below the outliner is a Properties panel. On its left margin you will see a series of tabs identified by these icons. Hovering over them reveals their names.

Blender tab selector

We won’t have anything to do with some of these (Particles, Physics, etc.) The important ones for our purposes are Render, Output, Object and Object Data.

Note: sometimes the collection of tabs changes, depending on what object in the Blender world is selected. At this moment the plane is selected, but if the light were selected some of these tabs would disappear, and the Object Data tab would display a green light bulb icon.

The rest of the screen is taken up by the large 3D Viewport panel. Navigating in this is similar to using Google Earth, but a little different.

  • Rolling the mouse wheel zooms you in and out.
  • Holding down the mouse wheel button while moving the mouse in this panel allows you to rotate around objects.
  • To pan, hold down Shift and the mouse wheel button while moving the mouse to either side.

The other thing to know about Blender is that some keys work only when the cursor is over the 3D Viewport.

  • n toggles the 3D View Properties Panel
  • Numpad -. (period) centres the view on the selected object.
  • Numpad-0 toggles between this view and camera view.
  • TAB toggles the selected mesh in and out of Edit mode (and we won’t be using this).

Remember that Blender is a studio for animators. It is designed to allow you to sculpt new objects, and then generate many frames of animation in which various objects are lit by lights and shot from a specific camera. We are not going to create any new objects, and are going to shoot only a single frame; so there are many controls we will not use.

However, setting up our light and camera are crucial, and that’s what we do in Part 2.

Civilization in the United States

I came across this unexpected map the other day:

Markham, S.F., Climate and the Energy of Nations, p. 189, Intelligence in the United States

This is the kind of thing you just do not want to take out of context. So here’s the context.

This is from a little hardcover volume I picked up at a used-book store, called Climate and the Energy of Nations. It was written over the course of the 1930s by a British author named S. F. Markham, and published in 1947 by Oxford University Press.

Markham describes himself as a former aide to British Prime Minister Ramsay MacDonald, one of the founders of the Labour Party. Markham says he conceived a passion for investigating how climate—primarily temperature and humidity—affects the energy of nations. By “nations” Markham means both states and ethnic groups, the two being commonly conflated in the 1930s, when a state representing a specific nation was regarded as a formula for success. (E.g., Greece for the Greeks, Turkey for the Turks.) By “energy,” Markham is getting at—he never really defines this term—a sort of generalized ability to do stuff, be great, solve problems and get ahead. And, just to be forewarned, when he discusses energy it spills over pretty quickly into murky assertions about intelligence, civilization, and which nations produce “great men.”

Although ostensibly data-driven, the book somehow returns again and again to the conclusion that (since the invention of the chimney) northwest Europe has had the world’s best controllable climate coupled with the best reserves of coal and oil for heating—and therefore its people have the most “energy.” In a surprising coincidence, he discovers that the people in the world with the greatest national energy seem to be… the British.

isotherms in Europe Markham

Go figure.

So, anyway, what’s this about intelligence in the United States?

Well, Markham is quite interested in what happens when people from his ideal climate regimes move to less ideal climates. In the United States his data identifies New England and the Pacific coast as having ideal climates (north of the 75° F summer isotherm and south of the 10° F winter isotherm). The rest of the country, he concludes, at some point of the year is too cold or too hot for people to have optimal energy.

isotherms in US Markham

The map at the top of this post, “Intelligence in the United States,” is Markham attempting to support his climate point, which is that even when colonized by emigrants from the same part of northwest Europe, subsequent generations don’t do as well in certain parts of the United States.

But where did he get this state-by-state data on intelligence? Markham explains that it is from a contemporary of his, Frederick Osborn.

No assessment of a nation’s energy is or can be complete without some assessment of its culture or intelligence. … Possibly the best study ever carried out in this connection is that by Frederick Osborn of the American Museum of Natural History, who in 1933 produced an ‘Index of Cultural Development’ for the various states based on

  1. Mental test among schoolchildren
  2. Army intelligence tests
  3. Illiteracy percentages
  4. Magazine readers per 100 total population
  5. School teachers’ salaries
  6. Library statistics

(p. 188)

(Notice, by the way, that Markham took Osborn’s Index of Cultural Development and mapped it as “Intelligence in the United States.” A bit of sleight of hand.)

I have to say, this is where I really put the book down and marvel at how far we’ve come since the 1930s. I mean, you would never today have someone taking these statistical indices and saying they represent cultural development—however much you agree that magazine readership and libraries are a good thing. Or, if you encountered it, you would assume it was a production of the far Right, not of a venerable institution like the American Museum of Natural History.

I’m not exactly sure what has happened in the world of ideas over the last ninety years to preclude this kind of thing now—something to do with a discrediting of positivism and the rise of critical theory. Maybe someone can fill me in.

But throughout the book one finds the evidence of what a distant and alien world the 1930s was. Markham, for example, inevitably speaks of “man:”

Since civilization is produced by men—and therefore by individuals—the question arose as to what conditions render it possible for a man to be at his best mentally and physically, for it seemed not illogical that where men enjoy conditions that permit them to be at their best there are present the raw essentials of civilization.

(p. x)

Although he rejects racial theories of predestination at the beginning of his book, the most incredible racial and ethnic generalizations seem to flow unconsciously from his pen.

Coolness is a prime essential to the physical work (including typewriting), without which all mental effort becomes, as the Arab’s, mere conversational speculation, barren in result.

(p. 215)

And then there’s his whole chapter on the ‘poor white’ problem.

There are, of course, great areas, such as India, China, and the Dutch East Indies, where no permanent white settlement has taken place, but in comparable areas, such as portions of South Africa, the southern states of the United States and the West Indies, there has arisen the problem known to the world as that of the ‘Poor White.’

(p. 134)

Well, back to the maps.

Continuing to demonstrate his climate theory, Markham presents two more maps that portray things he feels represent national energy: “Infantile Mortality 1930-34” and “Per Capita Income 1940.”

Markham, S.F., Climate and the Energy of Nations, p. 183, Infantile Mortality 1930-34

Markham, S.F., Climate and the Energy of Nations, p. 192, Per Capita Income, 1940

[Canadians—well, certain Canadians—can take heart from that little caption below the figure on infant mortality: “If this map were extended to include Canada, British Columbia and the Ontario peninsula would be in the best (i.e., lowest) area.”]

Things aren’t looking good for the American South or mid-continent, but readers of Mark Monmonier’s excellent book How To Lie With Maps might have some intelligent questions to ask here about how Markham chose his data breaks. Was there a natural break at 58 deaths out of 1000 births, or was that chosen because it helped this map look like the isotherm map above?

Finally, he wraps it all up into an aggregate map whose data he says is a combination, with equal weighting, of the data behind the previous three maps. This map, he says, represents civilization.

Markham, S.F., Climate and the Energy of Nations, p. 195, Civilization in the United States

But of course when you look at it, the first thing you’ll think is: these are the two sides in the American Civil War. How Markham missed considering that the devastation of the South in that war might outweigh climate as an explanation for its scoring relatively poorly in the 1930s, I don’t know. I think he was doing what we should all know never to do in a Science Project: looking for data that confirmed his hypothesis.

[And, just a cartographer’s observation: that little wavy dividing line across Missouri—it’s quite suspicious. This is ostensibly state data: so how did they divide Missouri as if it were county data? Does this represent some embedded feud, some cultural rift, between north and south Missouri that I don’t know about?]

At the end, Markham includes a chapter on air conditioning, which he thinks will “change the whole course of history in the United States,” a country that he sees as being for the most part burdened by its climate. However, he points out that while air conditioning makes office and factory work pleasant, it does not help the person working outside. In a cold climate, working outside keeps one warm, but in a hot climate “activity adds to one’s feeling of malaise.”

Climate and the Energy of Nations is kind of an entertaining read, if you’re able to tolerate how disturbing the 1930s was. I’m still not clear on whether Markham was invested in the status quo, or sought to change it. But by choosing climate as his determinant of the “energy of nations,” he picked something that (he would have thought) would never change. It’s a deterministic model of who will thrive. And to that extent, the book does get you thinking about how hot and cold affect us all, and how climate change could yet have additional bad results we haven’t even clocked.


Georeferencing (registering) a map in a Lambert projection

This is a procedure that came up in a discussion with a friend, and I think it is tricky enough to be worth recording here.

Specifically we are using QGIS 3 to georeference a 1941 map of the Odessa, Ukraine, area, one of the 1:1,000,000 International Map of the World Series.

Odessa 1941 reduced

This map is bounded by latitudes 46° and 51° north, and longitudes 27° and 36° east.

We want as little loss of image quality as possible, therefore we want to avoid warping (re-projection). If warping the map were not a concern, we could georeference it in a geographic projection (e.g., EPSG 4326 or 4267) with a few ground control points (GCPs) in degrees, using the intersections of latitude and longitude lines.  But in this case we need to georeference it in the original Lambert projection, as it was printed. The transformation will be “linear” and in fact only a world file will be written. The world file will enable QGIS to read in the map image without warping it.

This will not be possible unless we can figure out what the original projection was. Fortunately at the bottom of the map the person who did the scanning has left the statement of projection.

projection text

A Lambert conic projection usually relies on four parameters. First there are the two standard parallels, the latitudes at which the cone intersects the globe: these are the two numbers listed here, 36° north and 52° 48′ (52.8°) north. They will be called lat_1 and lat_2 in the projection definition.

Then there are the coordinates of the origin point of the projection: lon_0 and lat_0. The meridian that runs straight down through the centre of the map is clearly the central meridian of the projection, because it runs perfectly vertically, the only longitude line that does so on the map. The centre of the map falls halfway between longitudes 31 and 32, so lon_0 is 31.5°.

Lat_0 is a little harder to figure out. However, in my experience it doesn’t really matter what you choose for lat_0: it only sets where the projection’s y origin falls, shifting all the northings by a constant without changing the shape of the map. I chose 40° north.

The first step then is to create a new CRS (coordinate reference system) in QGIS for this Lambert Conic projection. We go Settings>Custom Projections, and click the “+” button to add a new CRS.

new CRS

Plugging the parameters we determined above into the normal Lambert Conic definition, we get this:

+proj=lcc +lat_1=36 +lat_2=52.8 +lat_0=40 +lon_0=31.5 +x_0=0 +y_0=0 +ellps=intl +units=m +no_defs

QGIS will assign an ID number to your new CRS (despite the EPSG-style look, these are local USER codes, not registered EPSG codes). In this case I got 100030, but yours will likely differ; they are always greater than 100000.

The next step is to put my main map window in QGIS into the new projection, and to load a layer of latitude and longitude lines, such as the Natural Earth 1:10 million scale one-degree graticule. I turn on labels for these lines so I can see which degrees I am looking at.

display graticule in custom projection

The reason I do this bears directly on the central technique we are going to use here. Because the original map is in Lambert, and I want to register it in Lambert, I will have to enter Lambert coordinates for each of the GCPs. But looking at the map I only see degrees: I don’t see Lambert coordinates. Fortunately QGIS can tell me the Lambert coordinates of each grid intersection, as long as I am displaying my graticule in the same CRS I want to use to register the map. You can see this by zooming in on a grid intersection, hovering your mouse over it, and looking at the Coordinate text box in the bottom margin.

noting Lambert coordinates

There are the Lambert coordinates—at least for this specific Lambert Conic projection we are using—for 46° north, 26° east: -421460 metres east, 674061 metres north.
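(If you want to double-check the custom projection outside QGIS, the same conversion can be scripted with pyproj. This is purely a sanity check, not part of the workflow, and it assumes you have pyproj installed:)

from pyproj import Transformer

# Our custom Lambert Conic definition from above
lcc = ("+proj=lcc +lat_1=36 +lat_2=52.8 +lat_0=40 +lon_0=31.5 "
       "+x_0=0 +y_0=0 +ellps=intl +units=m +no_defs")

# always_xy=True means we pass (longitude, latitude) in that order
to_lambert = Transformer.from_crs("EPSG:4326", lcc, always_xy=True)

x, y = to_lambert.transform(26.0, 46.0)  # 46° north, 26° east
print(round(x), round(y))                # should print values close to -421460 674061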

We don’t need to write these down though. We will get them assigned to the map image in a more elegant way.

Now it’s time to open the georeferencer (Raster>Georeferencer) and bring in the map image. Immediately you will be asked which CRS you want to georeference this map in. Choose the custom CRS you just made (in my case, 100030).

image read in

Before you place GCPs, go to the settings of the georeferencer and set up your transformation parameters to ensure no warp will result.

transformation settings

You want the transformation type to be Linear. The resampling method is not important, because no resampling will occur. The target SRS is your custom CRS. Check the box called Create world file only (linear transform). (I also like to check the Load in QGIS when done box.)

Note that in the georeferencer the bottom line now looks like this:

georef parameters on status line

I’m going to place the first GCP in the lower left corner (the southwest corner), which is 46° north, 27° east. Before I do this, I go into the main map window and zoom in on just that intersection, to a fairly large scale, say 1:10,000.

Now in the georeferencer I place my GCP point on 46° north, 27° east, and QGIS asks me for the X and Y of that point. Or, I can click From map canvas.

first GCP

Once I click From map canvas, the georeferencer is hidden, the cursor becomes a cross, and I am invited to pick that same point on the main map canvas. I carefully click right on the intersection of 46° north and 27° east, and immediately the Lambert coordinates of that point are filled in for me in the dialogue box:

lambert coordinates transferred

I click OK, and I’m ready to do my next GCP. Remember, the first thing I will do is pan the main map to the coordinate intersection where I’m about to add this second GCP.

A linear transformation needs only a couple of GCPs, but I like to do four so I get an estimate of my error. I do the four corners of the map.

At this point I can see, from the GCP table at the bottom of the georeferencer, how large my error is.

GCPs created

The residuals look like 3 to 5 pixels, and the mean error is 6.5 pixels: quite acceptable for an image that is 4700 x 5700 pixels.
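(If you’re wondering how a summary error of 6.5 can be larger than the individual residuals: each GCP’s error has both an x and a y component, and the summary figure combines them. A root-mean-square over both components is one common way to aggregate; here is a sketch with made-up residuals—I haven’t checked QGIS’s exact formula:)

import math

# Hypothetical (dx, dy) residuals in pixels for four GCPs
residuals = [(3.2, 4.1), (-4.6, 3.0), (3.9, -4.8), (-2.8, 4.4)]

# Root-mean-square of the combined x/y error
rms = math.sqrt(sum(dx**2 + dy**2 for dx, dy in residuals) / len(residuals))
print(f"{rms:.1f} px")  # ~5.5 px with these made-up numbers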

Now I hit the Start Georeferencing button (green triangle “play” button) and a world file is written. Because I’ve checked Load in QGIS when done, I immediately get asked for the CRS of the new georeferenced map, and again I select the new custom CRS I created for this map.

The map appears in the main map window, and I drag it under the graticule layer, to see that it is properly georeferenced.

map georeffed

There is no new raster image with this linear transformation, just a small text file with the same name as the map and a .WLD extension.
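For the curious, a world file is just six plain-text numbers, one per line: the x pixel size, two rotation terms (both zero for a linear transform), the negative of the y pixel size, and the x and y map coordinates of the centre of the top-left pixel. Something like this (illustrative values only, not the actual numbers from this map):

147.3
0.0
0.0
-147.3
-321450.0
1225380.0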

To clip off the collar of the georeferenced map, in case I want to mosaic it with adjacent maps, I can create a polygon in a temporary layer that covers the part of the map I want to keep…

ready to clip 0


and then do Raster>Extraction>Clip raster by mask layer. I like to check Create an output alpha band and Keep resolution of output raster.

ready to clip

And the result is a clipped raster with transparency in band 4.

clip finished
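Incidentally, the same clip can be done from the command line with gdalwarp; the file names here are hypothetical, and adding -tr with the source pixel size is one way to pin the output resolution:

gdalwarp -cutline clip_mask.gpkg -crop_to_cutline -dstalpha odessa_1941.tif odessa_1941_clipped.tif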


Octagons in Baku

Besides the window in the Divankhana, I saw a lot of geometric design in Baku based around octagons.

For example, consider this pattern in a window in the external courtyard wall at Baku’s Taza Pir mosque.

DSCF5870

This beautiful pattern, with its eight-pointed stars set within octagons, turns up on plate 67 in Jules Bourgoin’s 1879 Les Éléments de L’Art Arabe (which you can download from archive.org).

Bourgoin plate 67 3x3

Its wallpaper group is the fairly common *442 (p4m), and it is generated by tessellating a square cell.

Bourgoin plate 67 single tile

Construction of this pattern is straightforward. The eight-pointed star in the centre is inscribed in a circle whose radius is one quarter the side of the square. The vertices of the octagon are found by extending the sides of the star. The rest of the construction lines are extensions of the octagon sides, and lines connecting star dimples that are three apart.


bourgoin pattern 67 construction lines
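In case it helps to see the geometry as numbers: here is a minimal Python sketch (my own, not from Bourgoin) that computes the outline of the central star for a unit square, assuming the star is the classic form made of two overlapping squares rotated 45° from each other:

import math

s = 1.0        # side of the repeating square tile
r = s / 4      # circumradius of the star, per the construction above

# Outer points of the star: 8 points on the circle, every 45 degrees.
outer = [(r * math.cos(math.radians(45 * k)),
          r * math.sin(math.radians(45 * k))) for k in range(8)]

# Dimples: where the two squares' edges cross, at the half-angles,
# on a smaller circle of radius r * sqrt(2 - sqrt(2)).
r_in = r * math.sqrt(2 - math.sqrt(2))
inner = [(r_in * math.cos(math.radians(22.5 + 45 * k)),
          r_in * math.sin(math.radians(22.5 + 45 * k))) for k in range(8)]

# Interleave outer points and dimples to walk the star outline.
star = [p for pair in zip(outer, inner) for p in pair]

In essence, scaling those points by the tile size and repeating them at each cell centre is what the tessellation amounts to.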

But one enters the Taza Pir compound via a stairway from the street. The panels in the stairwell are related to, but subtly different from, the window design!

DSCF5873

Seen head-on…

taza pit entryway pattern 4-tile

What did they do here? There is the same eight-pointed star in the centre, and the same enclosing octagon, but in this case they’ve trimmed the repeating square back to the borders of the octagon.

taza pit entryway pattern square2

As a result, the square tile borders stand out strongly as lines, and around each point where four tiles meet we get a big diamond holding four small diamonds.

taza pit entryway pattern 12-tile

(This also belongs to the *442 wallpaper group.)

Now, in the Old City, I came across a piece of octagon-based decoration that illustrates what happens if one doesn’t follow best practices, as explained by Eric Broug. This pattern starts with the same design as the Taza Pir windows (above; a.k.a. Bourgoin’s Plate 67), but then repeats a somewhat random subset of it. In other words: incorrect tessellation.

DSCF5758

It would appear that the manufacturer of these pre-cast concrete blocks selected a piece out of the overall pattern that was not the all-important basic square, but rather a rectangle.

subset taken for pre-cast

Hence each of the concrete blocks looks like this:

single block

When you put them together, lines match up, but the effect of the original design is lost.

tiled pattern

The wallpaper group of this pattern would be *2222 (pmm).

Elsewhere in the Old City, there were pre-cast patterns that did tessellate pleasingly, again with octagons.

DSCF5747

But back at the Taza Pir mosque, I spotted this on an adjacent building, which I believe is Baku Islamic University:

Taza Pir

The grill pattern is octagons packed together, with squares in between; and eight radial lines emanating from the centre of each octagon. It’s basically the central column of this pattern:

Taza Pir end pattern

But look at what they did in the point of the arch. It’s beyond my knowledge whether this is best practice or not, but it is definitely creative.