The Hollow Plain of Ka’ra

I learned of the existence of the hollow plain of Ka’ra, in Iraq, while reading Gertrude Bell’s letters.

On the 10th of February, 1911, Gertrude, who is forty-two years old, sets out across the Syrian desert from Damascus to go to Hit, some 600 km east, on the Euphrates River. Both of these cities are, at this time, in the Ottoman Empire, so there are no international borders to be crossed.

She begins on a horse.

I rode my mare all day, for I can come and go more easily upon her, but when we get into the heart of the desert I shall ride a camel. It’s less tiring. (Feb 10)

Gertrude (who is fluent in Arabic) is not travelling alone: she is part of a party of fifteen, some of whom are her employees. She describes them as…

myself, the Sheikh, Fattuh, ‘Ali and my four camel men, and the other seven merchants who are going across to the Euphrates to buy sheep.

For much of this journey they are outside of the zone of Ottoman control.

In half an hour we passed the little Turkish guard house which is the last outpost of civilization and plunged into the wilderness.

Their exact route is not easy to trace from her letters—she does not give many landmarks—but on February 16th she reports that

We came to the end of the inhospitable Hamad today and the desert is once more diversified by a slight rise and fall of the ground. It is still entirely waterless, so waterless that in the Spring when the grass grows thick the Arabs cannot camp here.

She uses the term Hamad to denote the core of the Syrian desert, the highest, flattest part—although other writers call the entire Syrian Desert the Hamad. On the next day (the 17th) she writes that they have deviated from their route, which, up to this time, had been almost due east.

So it happened that we had to cut down rather to the south today instead of going to the well of Ka’ra which we could not have reached this evening… the whole day’s march was over ground as flat as a board, flatter even than the Hamad…We had a ten hours march to reach the water by which we are camped.

The 18th…

we got off half an hour before dawn and after about an hour’s riding dropped down off the smooth plain into an endless succession of hills and deep valleys – when I say deep they are about 200 ft deep and they all run north into the hollow plain of Ka’ra.

This is the last we hear about the hollow plain of Ka’ra, which apparently has a series of north-flowing canyons running into it from the south. On the 20th they arrive at the ruins of Muḩaywir in the Wādī Ḩawrān.

We rode today for 6 and a half hours before we got to rain pools in the Wady Hauran, and an hour more to Muhaiwir and a couple of good wells in the valley bed.

The Wādī Ḩawrān and Muḩaywir are not difficult to locate. They show up on this 1959 Times Atlas map of the Middle East, along with the sites she now visits on her way to Hit: Amij, Khubbaz and Kubeisa.

Times_atlas_markup

But Ka’ra (or “Kara” as it is spelled in the printed edition of Bell’s letters, rather than the online archive of her diaries and letters) is not there.

Now, if you’re quicker than me, you probably already picked up that on this map, just to the west of the Wādī Ḩawrān, there is a “Jumat Qa’ara”—and reasoned that this might be what Gertrude referred to as the hollow plain of Ka’ra. But I missed that, and began a pointless search of Google Maps, OpenStreetMap, geonames.org and Wikipedia for something called the Ka’ra. (There is a Wikipedia entry for “Kara Depression,” but it refers to a depression in northern Russia.)

I did, however, notice on the shaded relief of OpenTopoMap that 40 km west of Muḩaywir there was a 50-km-wide depression with a series of canyons flowing into it from the south. Was this “the hollow plain of Ka’ra”?

opetopomap_annotated

But this feature goes unnamed on online mapping sites.

This shows the weakness of much of online mapping: it is point-based. Area features, which are readily labelled on what we can call “static maps” (maps designed to be printed, or to be a single image you can’t zoom in on), do not make it into the databases that underlie slippy maps. Neither do linear features, like rivers. I do not know why OSM, Google et al. try to make everything into points, but points dominate online mapping.

(As an amusing exercise, try typing “Yangtze River” in the search box on Google Maps. You don’t get a very satisfying result.)

 

However, by luck I found an article by another famous British archaeologist, Sir Aurel Stein, written some twenty-nine years later. He was writing about his search for Roman forts along the line from Hit to Palmyra. A lot had changed: World War I had happened; the British had created the mandate state of Iraq; they had built the pipeline to carry oil from Kirkuk to the Mediterranean, and put pumping stations along it; aircraft were in common use. Stein wrote…

after gaining the pipe-line station H2 for a base, we resumed the survey of the ancient trade route which had led from Hit to Palmyra. I was able to recognize its line clearly both from the air and on the ground also over long stretches right up to where Pere Poidebard had before determined its continuation beyond the Syro-‘Iraq frontier. The line proved to have led with characteristic Roman straightness right across the wide sandy depression of Qa’ara, and not as had been supposed before past the ruins of Qasr Helqum.

Ah, “the wide sandy depression of Qa’ara!” And on his map, there it is, just north of the H2 pumping station, and northeast of Mlosi, labelled as “Al Qa’ara”.

Stein map 1940

Ironically, the online mapping sites do not even show the H2 pumping station and airfield, although it used to be a standard feature on static maps, like this National Geographic 1960s map of the Middle East.

NG Middle East 1970 detail

Mlosi, which Stein also called the “well of Mlosi,” is shown on the National Geographic map as Bi’r al Mulusi, and is probably the same as the Ābār al Malūsī (ابار الملوسي) located by geonames.org at N 33°29′48″ E 40°06′14″. (Ābār being the plural of Bi’r, a well.) Qasr Helqum would be Qaşr al Ḩalqūm, which geonames.org places visibly on the north rim of the depression.

Google Hybrid with annotation

Google Hybrid of the hollow plain of Ka’ra/Al-Qa’ara, with labels added manually

Bell doesn’t indicate whether she had heard of the Ka’ra before, but Stein refers to “the wide sandy depression of Qa’ara” as if it is well-known. How did he learn about it? What maps was he using? It would also be nice to know how it is spelled in Arabic, so we could know how it should be transliterated to the Latin alphabet.

 

GE view of Qa'rah

Al Qa’ara as seen on Google Earth, looking southeast, with the “deep valleys” running “north into the hollow plain.”

Well, it turns out, if I look at more static maps, the hollow plain is consistently labelled.

The 1986 Soviet 1:200,000-scale topographic map I-37-23 has “ВПАДИНА КААРА”; впадина means “depression.”

Kaara on Soviet 200K I-37-23

A 1944 map by British Naval Intelligence, from the Perry-Castañeda Library, shows it as “JUMAT QAARA.” (And this is what I see, looking back at the Times Atlas, above.)

1944 Naval Intelligence western Iraq detail

And the 1942 map from the US Army Map Service (NJ-43-11 “Rutba”) calls it “JUMAT AL QAARA.”

1942 AMS quarter inch I-37 Q Rutba detail

There are also a number of recent geological papers about this feature. Mustafa and Tobia have one called Modes of Gold Occurrences in Ga’ara Depression, Western Iraq, in the Iraqi Bulletin of Mining, 2010. Their map leaves no doubt that the Ga’ara Depression is the same as Bell’s Ka’ra and Stein’s Al Qa’ara.

Mustafa and Tobias map

The paper’s abstract is in Arabic, so we can see how they render “Ga’ara Depression” in Arabic.

Tobia article titles

“In Ga’ara Depression” is in_ga'ara_depression_ar, which uses the unusual letter gāf-with-line (گ), a variant of kāf (ك) that is regularly used in Persian, a language that has a /g/ sound. منخفض (munkhafaḍ) means “depression.”

It’s a confusing variety of transliterations: Ka’ra, Qa’ara, Ga’ara. What helps make sense of it is that (as I learn at https://en.wikipedia.org/wiki/Varieties_of_Arabic) the letter ق, pronounced /q/ in classical Arabic, has become /g/ in both the Iraqi and Nejdi dialects. Typically these words are still spelled with a ق (as in Qa’ara), but sometimes with the gāf-with-line (as in Ga’ara).

Poking through an Arabic–English dictionary I see that the Q-‘-R triliteral root in Arabic means to be deep, or hollowed out. So, fittingly, Qa’ara may simply mean the deep, hollowed-out place. This may be why Gertrude Bell called it “the hollow plain of Ka’ra.”

So, now I know where this hollow, sandy plain is, but I am left with one mystery that I can’t solve. On the maps where it is labelled Jumat Al Qaara, or Jumat Qa’ara, what does Jumat mean? Jum’ah (جمعة) is the word for Friday, but it seems unlikely that this is the Friday of Hollows. The J-M-‘ root means to gather or collect, so conceivably this is the Collection of Hollows? I could not find other Jumats to compare this to. Any ideas from Arabic speakers would be welcome.


How do we understand the size of Syria?

Syrian_comparison_inset_2004_CIA

This inset appears on a CIA map of Syria from 2004. We can assume it’s meant to give the map reader a sense of the size of Syria by comparing it to a region that he or she is familiar with.

I’m fascinated, first, by the assumption that the decision-maker reading the map is located in the mid-Atlantic states (Washington, presumably). This kind of regionality runs throughout American politics: a geography of where important people live, and where they don’t, that one carries in the memory and consults without even knowing it.

This is an attempt to make the map *personal* but I almost think it should be captioned, “This might be helpful to you if you happen to live in the northeast.”

The second thing I’m fascinated by is this: assuming we have to stick to the northeast USA, would it have been smarter to align Damascus with Washington, DC? These are the two national capitals. Such a juxtaposition would, in the context of the Syrian Civil War, allow the President to imagine New York no longer doing his bidding, and Providence functioning like an independent state.

CIA inset Washington and Damascus aligned

Perhaps this juxtaposition puts too much of Syria out to sea, but it does put New York City more or less where Raqqa, the ISIS capital, was.

Of course one must always be careful when constructing these comparison maps. In GIS software, a polygon (Syria) whose coordinates are in degrees, dragged to another latitude, will be the wrong size. The safer method, which I have used here, is to make separate maps of Syria and the northeast US at the same scale, and then juxtapose them in a photo editing program (the GIMP, in my case). Syria is about 780 km from end to end on its long axis, about the distance from Boston to Richmond, Virginia.
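If you want to sanity-check a distance claim like this yourself, a quick great-circle calculation is enough. A minimal sketch using the haversine formula; the coordinates below are approximate city centres, not surveyed points:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance in km between two lat/lon points given in degrees."""
    p1, p2 = radians(lat1), radians(lat2)
    dlat, dlon = p2 - p1, radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Boston to Richmond, Virginia (approximate coordinates)
print(haversine_km(42.36, -71.06, 37.54, -77.44))  # on the order of 760 km
```

Close enough to Syria’s ~780 km long axis for a visual comparison.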

This brings up the question of what it means to understand distance. In the original map, what does the distance from, say, Philadelphia to the eastern tip of Tennessee, mean to people living in Washington? I suspect they rarely go very far to the southwest from the city, and when they do they experience slow, twisting roads that have difficulty passing through the Appalachian mountains. So if the Washington-dweller says “It would take me six hours to drive to the tip of Tennessee!” is there anything meaningful at all there for his understanding of Syria?

I kind of prefer this juxtaposition, both for the similarity of desert landscape, and the sense of distance.

CIA inset LA and Damascus aligned

Los Angeles stands in for Damascus here, and Las Vegas finds itself somewhere up on the Euphrates. (“Las Vegas on the Euphrates” is not yet, but may someday be, the tourism slogan of Deir Ez-Zur or Raqqa.) If you are familiar with the distances and deserts of the American southwest, it is remarkable how much smaller Syria looks when shown like this.

Of course, as Appalachian Trail hikers know, there happens to be a town in Virginia called Damascus. It’s actually right there, just across the border from that eastern tip of Tennessee. So perhaps the best juxtaposition of all aligns Damascus, Virginia, with Damascus, Syria.

CIA inset Damascus and Damascus aligned


Digital Atlas of the Roman Empire

Fans of the Digital Atlas of the Roman Empire basemap (or DARE basemap) may have noticed that it disappeared from Peripleo a few months ago!

Peripleo is the superb mapping site where you can look up the locations of places in the ancient world. It is absolutely indispensable in those moments when you just can’t remember which modern town occupies the site of ancient Mesembria. Which happens to me a lot. Also Sirmium.

At present, when you look at the available layers in Peripleo, the satellite layer (“Aerial”), OpenStreetMap (“Modern Places”) and the empty basemap are there, but no DARE (“Ancient Places”).

peripleo map layers dialogue

Even more worrisome, the old URL for the DARE map where it was hosted at the University of Lund, Sweden, http://dare.ht.lu.se/, was not responding.

However, DARE is now back! The map has been moved to a new location at the Centre for Digital Humanities, University of Gothenburg, https://dh.gu.se/dare/.

new DARE

Many thanks to its creator and maintainer, Johan Åhlfeldt.

Better yet, he has also given out the URL for the tile server itself, so if you run QGIS you can now add DARE as a basemap layer via the QuickMapServices plugin.

Here’s how:

Go Web>QuickMapServices>Settings

settings

Go to the Add/Edit/Remove tab.

adEditRemove

Click the ‘+’ button next to My groups, and create a DARE group.

addGroup

Click OK and then click the ‘+’ button next to My Services. First add the name of the service and choose TMS for Type.

addService1

then go to the TMS tab and fill in these parameters:

addService2

Click OK and then Save, and your service is set up.

It should now appear on the Web>QuickMapServices menu.

dare example

Also, since tile-based services operate at zoom levels that correspond to a very strange set of scales (like 1:1,155,583), remember that you can always snap to the nearest tile scale by going Web>QuickMapServices>Set proper scale.
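For the curious, those strange scales come straight from the web-mercator tile pyramid: each zoom level halves the metres-per-pixel, and the scale denominator follows. A rough sketch, assuming 256-pixel tiles at the equator and the 96-dpi convention QGIS uses for on-screen scale:

```python
EARTH_CIRCUMFERENCE = 40075016.686  # metres, at the equator (web mercator)
METRES_PER_INCH = 0.0254

def tile_scale(zoom, dpi=96, tile_px=256):
    """Approximate scale denominator of a slippy-map zoom level at the equator."""
    metres_per_pixel = EARTH_CIRCUMFERENCE / (tile_px * 2 ** zoom)
    return metres_per_pixel * dpi / METRES_PER_INCH

print(round(tile_scale(9)))  # ≈ 1155583 — the "strange" scale mentioned above
```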

Happy ancient world mapping!

 

Shaded relief with BlenderGIS (2019), part 3

[Back to Part 2]

Why do this?

The shaded relief image (also often called a hillshade) that GIS software produces is pretty good. Here’s one made with QGIS.

QGIS 2-up

The hillshade by itself (left), and blended with hypsometric tinting (right)

Hillshading algorithms typically take three parameters: the elevation of the sun, the azimuth (direction) of the sun, and the vertical exaggeration of the landscape. (60°, 337° and 1X, in this case.) From the slope and aspect of each cell they work out how directly that bit of landscape faces the sun, and then produce a greyscale image.
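The standard GIS recipe can be sketched in a few lines of numpy. This is a minimal illustration of the slope-and-aspect illumination method (Horn-style), not the code QGIS actually runs, and the sign conventions assume a north-up raster:

```python
import numpy as np

def hillshade(dem, sun_elev_deg=60.0, sun_az_deg=337.0, z_factor=1.0, cellsize=1.0):
    """Greyscale hillshade (0-1) from a 2D elevation array; no cast shadows."""
    zenith = np.radians(90.0 - sun_elev_deg)
    # Convert a compass azimuth (clockwise from north) to a math angle
    azimuth = np.radians(360.0 - sun_az_deg + 90.0)
    dy, dx = np.gradient(dem * z_factor, cellsize)
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(dy, -dx)
    shaded = (np.cos(zenith) * np.cos(slope)
              + np.sin(zenith) * np.sin(slope) * np.cos(azimuth - aspect))
    return np.clip(shaded, 0.0, 1.0)  # 0 = black, 1 = fully lit
```

On a perfectly flat DEM every cell simply gets the value sin(sun elevation), which is why the method produces no cast shadows at all — the limitation Blender's ray tracer removes.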

You can tweak the brightness and contrast in your GIS software, or stretch the histogram to your liking. This can do a lot to lighten up the darkest shadows, or to turn the hillshade into a ghostly wash that merely suggests relief without dominating everything else on the map.

Much of the time what we can get out of GIS software meets our needs for a hillshade, which, after all, is not the final map but merely a layer of the map. (Although occasionally it is the pièce de résistance…)

But perhaps you don’t like the glossy, shiny quality of that hillshade above. You think the mountains look like they were extruded in plastic. They remind you a little too much of the Google Maps Terrain layer. Maybe you’d like something that looks more like it was chiselled from stone, like this…

Blender 2-up

Or perhaps you are interested in…

Blender 2-up 2 suns

A second sun, shining straight down to eliminate the darkest shadows

Blender 2-up warm and cool light

A warm (yellow) sun in the northwest, cool (blue) shadows

Blender 2-up denoised

Denoising performed on the image after rendering

This might be why you are investigating Blender.

The big settings that make a difference (besides the conventional settings of azimuth, elevation and vertical exaggeration) are…

  • material
  • multiple lights and their colours
  • amount of light bounce
  • denoising the render

Material

In animation modelling, Material is what reflects, absorbs and scatters light. Blender spends much of its time tracing light rays and deciding how the surfaces they encounter affect them.

With your plane selected, go to the Material tab icon Material tab and hit New. The default material that comes up has these Surface properties. (And this is what Blender used for your basic render.)

Principled BSDF initial properties

Principled BSDF is a sort of super-versatile material that allows you to have all of these properties (subsurface scattering, metallic look, sheen, transmission of light) that in earlier versions of Blender were assigned to specific surface types, like “Diffuse BSDF,” “Glossy BSDF” or “Subsurface scattering.”

(BSDF stands for “bidirectional scattering distribution function.”)

These various surfaces are actually shaders, which are pieces of software that render the appearance of things (and may actually run on your GPU, not your CPU). If you click on “Principled BSDF” next to the word “Surface” a list of all of the possible shaders comes up.

shader list

You can learn a lot more about shaders in the Blender manual, but Blender essentially wants to give you the tools to be able to simulate any material, from water to hair, and some of the effects you can get applying this to shaded relief are pretty weird.

8 kinds of material

The same piece of terrain rendered with different shaders. Top, left to right: Diffuse BSDF with a wave texture, Glass BSDF, Hair BSDF, Principled BSDF. Bottom, left to right: Toon BSDF, Translucent BSDF, Translucent BSDF with Principled volume emission, Velvet BSDF.

The main shader you probably want to play with is the Mix Shader, which allows you to blend the effects of two different shaders. The Mix Shader’s factor (Fac, from 0 to 1) determines how much the result is influenced by the second shader.

Mix Diffuse Glossy

On left, the original render; on right, a Mix shader (Fac=0.2) of Diffuse BSDF (defaults) and Glossy BSDF (IOR=2). This adds just a bit of glossiness to the surface.

 

Mix Diffuse Toon

On left, the original render; on right, a Mix shader (Fac=0.3) of Diffuse BSDF (defaults) and Toon BSDF (defaults). This brings in bright highlights that tend to wash out flat surfaces.
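As a rough numeric analogy (not Blender’s actual shader evaluation, which mixes whole light distributions), the Fac in the examples above behaves like a linear blend weight:

```python
def mix(a, b, fac):
    """Linear blend, analogous to the Mix Shader's Fac: 0 -> all a, 1 -> all b."""
    return (1.0 - fac) * a + fac * b

print(mix(0.8, 0.2, 0.2))  # 0.68: the result stays close to the first input
```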

The other aspect of material that is worth experimenting with is value. Blender’s default material, the Principled BSDF, has a default colour of near-white, and its value is 0.906 (on a scale of 0 to 1).

principled BSDF default base colour

By darkening this you create a more light-absorbing material.

Principled BSDF base colour 0.5 lamp 45 337 str 3 v2

On left, the original render; on right, Principled BSDF (base color value turned down to 0.5).

You might think that the effect of a darker material is just to shift the distribution of pixel values down in the render, and that you could get the same effect by turning down the brightness of the original. But if you look at the histogram of each image you see the pixel values are distributed differently, and even when both are displayed with “histogram stretch,” they are distinct from each other.

base color 0.906 versus 0.5 both histogram stretched

Left: the value of the base color of material (Principled BSDF) is 0.906 (the default). Right: the value of the base color is 0.5. Both are displayed with histogram stretch.

And of course if it works for your map you can give the material real colour. (Don’t forget to save the render as RGB rather than BW). This, however, is very similar to colorizing your shaded relief in GIS software.

sandstone yellow material

Multiple lights and their colours

To add another light to your scene, go Add>Light>Sun. You can also experiment with adding other types of lights: point lights (which shine in all directions from a specific place), spot lights (which send out a specific cone of light) and area lights.

A second light can bring out features in a very nice way.

second sun from the east

On left: vertical exaggeration 2x, one sun in the NNW, elevation 45°, angle= 10°, strength = 3. On right: the same scene plus a second sun in the east, elevation 45°, angle = 1°, strength 0.5.

By giving colour to lights, you can create differential lighting and colour.

 

Diffuse 2x BSDF default 45 337 str 3 color orange LAMP2 str 0.5 az east color yellow

Same scene as above, but now the sun in the NNW is orange, and the sun in the east is yellow.

There is one more light in your scene that often goes unnoticed: what Blender calls the world background colour. You can think of this as a very far away surface that nonetheless contributes a bit of light to your scene—from all directions. You can see the world colour in action if you render your scene with the sun strength turned down to 0.

world background only

No lights in the scene: mysteriously there’s still something there. This is the effect of the world background colour.

If there are places in your scene that sunlight does not reach, the world background still contributes to their illumination—and colour.

By default the world background colour is 25% grey (value = 0.25), but you can change this to a dark blue if you would like dark blue light to collect in your shadows.

world background blue

World background colour set to blue (Hex 414465), strength 1. Secondary pale yellow (Hex FFE489) light in east, strength 0.5

 

Amount of light bounce

Part of the charm of Blender is that it calculates how much light reflects off surfaces and then hits other surfaces, which is why even shadowed areas have some light. You can control, however, the number of bounces a light ray has before it expires. This is on the Render tab icon Render tab, in the Light Paths section.

light paths default

By default, the light paths settings are as above. With the material I am using, the Max Bounces for Diffuse materials is the important part, and by default it is 4.

The presets exist icon icon to the right of “Light Paths” indicates that there are presets available. Clicking here reveals three key presets:

  • Direct Light: light gets few bounces, and none off diffuse material
  • Limited Global Illumination: light gets one bounce off diffuse material
  • Full Global Illumination: light gets 128 bounces off diffuse material

Changing among these does not make a huge difference, but in scenes with deep shadows it is visible.

light paths compared

Clockwise from top left, the presets are: Direct Light, Limited Global Illumination, Full Global Illumination, and the default settings. (With vertical exaggeration at 2x)

Denoising the render

Blender offers the post-processing feature of denoising the render. I liken this to a blanket of snow on your landscape: it erases tiny details.

denoise comparison

Left: 2x vertical exaggeration, sun in NNW, elevation 45° strength 3. Right: the same, plus denoising (Strength = 1, Feature Strength = 1)

To activate denoising, go to the View Layer tab icon View Layer tab, scroll to the very last section, Denoising, and check the Denoising box.

denoising panel

I do not find that denoising does much if you keep the default settings. I tend to turn both Strength and Feature Strength up to 1.0.

In conclusion

This has been only the tip of the iceberg. Some things I have not talked about are

  • Applying a subdivision surface modifier to your mesh so that Blender is interpolating and smoothing it on the fly.
  • BlenderGIS’s ability to read your DEM As DEM Texture, which creates a plane with subdivision and displace modifiers instead of a plane whose every vertex corresponds to a DEM cell.
  • How to focus the orthographic camera down on one small part of your DEM where you can do quick test renders while you work out lighting, material, etc.
  • The node editor for complex materials
  • Using a perspective camera to shoot a scene that is not straight down
  • Cutting DEMs into tiles for Blender to work with, when the whole DEM is simply too much for the RAM on your computer.

Have fun exploring Blender’s many features!

Shaded Relief with BlenderGIS (2019), part 2

[Back to Part 1]

In the overall process we are now at step 3, but things will go faster now.

  1. Prepare your DEM
  2. Read the DEM into Blender as Raw DEM
  3. Render tab: select the Cycles rendering engine
  4. Adjust Z scaling (vertical exaggeration)
  5. Create and adjust georef camera
  6. Output tab: correct the final pixel dimensions of the output image to match the DEM. Set final image type to be TIFF
  7. Turn the light into a Sun, and adjust its properties
  8. Do a test render (F12)
  9. Render

At the end I’ll cover some of the variations on this process and extra tweaks you can do.

Render tab: select the Cycles rendering engine

Go to the Render tab icon Render tab and observe the Render Engine setting:

Render tab engine and sampling

Blender 2.8 comes with the Eevee render engine as the default, but Eevee is just a basic engine for previews. Change it to the Cycles render engine.

Render tab cycles selected

If you have a powerful GPU (i.e., video card), you can set Device to GPU Compute.

While you are here, adjust Sampling. To the right of the word “Sampling,” note the menu icon presets exist icon. This indicates that presets exist for this setting. Clicking on it, select Final. This increases the sampling numbers for the render, which will improve image quality.

Adjust Z scaling (vertical exaggeration)

The first thing to adjust on your plane is what we usually call the vertical exaggeration, but Blender thinks of as the Z scaling.

With the plane selected, go to the Object tab icon Object tab. Here you should see a number of sections, the first of which is the Transform section.

Object Transform section

Under Scale increase the Z value if you want to create vertical exaggeration. You should see your plane change shape.

[It’s important to increase Z scaling before setting up the georef camera, because if you increase Z scaling afterwards, you may accidentally have set the camera below the tops of the highest features.]

In my case the terrain has plenty of relief, so I’ll leave Scale Z at 1.000.

Create and adjust georef camera

Make sure the plane is still selected (in the Outliner) and go GIS>Camera>Georef.

A new camera (“Georef cam”) will appear in the Outliner. You do not need to delete the old camera.

When the georef cam is selected, then in the 3D Viewport you’ll see something like this (you may have to pull back).

georef cam created

This is an orthographic camera (meaning, in effect, that all points of its lens look directly down: there is no perspective distortion) placed over the plane.

To check that your camera is properly pointed at your landscape, and sees all of it, hover the mouse over the 3D Viewport, and hit 0 on the numeric keypad (or go View>Viewpoint>Camera). You should see what the camera sees (“camera orthographic view”)

camera orthographic view

Hit NumPad-0 again (or go View>Viewpoint>Camera again) to go back to regular (“User perspective”) view.

Output tab: Correct the final pixel dimensions of the output image to match the DEM, and set final image type to be TIFF

Go to the Output tab icon Output tab, and under Dimensions check the Resolution X and Y that were set up when you created the georeferenced camera. Also note the percentage (%), which is 100% at this point.

output resolution

Change Resolution X and Resolution Y to match the pixel dimensions of your DEM. In my case, the initial DEM was 839 x 702 pixels, so I enter these two numbers for X and Y. With the dimensions of the hillshade matching those of the DEM I can, at the end, apply the DEM’s world file to the Blender output, and georeference it.

It’s handy at this stage in the process to set percentage to something small, like 20%. This way you can do some quick test renders: Blender will produce a rendering whose pixel dimensions are only 20% of the overall Resolution numbers you set. (Before the final render you’ll be back here and set this to 100% again.)

Lower down on the same tab, under Output, set the parameters for the type of image you want.

Output output parameters

I typically set these to File Format = TIFF, Color = BW and Color Depth = 8, but you can get JPG, PNG, etc. If you later choose to assign a colour other than grey or white to the world background or the sun, you will probably want to set Color to RGB or RGBA.

Turn the light into a Sun, and adjust its properties

Your plane is built, your camera is set: now all that is left to arrange is the Sunlight.

Select the Light in the outliner, and go to the Light object data tab icon Object Data tab.

Initially you should see something like this under Light:

Light object data Light initial

Click on Sun, and then change Strength to 1.

Click on the Use Nodes button below, and set its Strength to 2. It should now look like this.

Light object data Light final

You will want to play around with the Strength setting (in the Nodes section) when you do test renders. The strength of the light affects how bright your hillshade is.

The Angle setting for the sun is important. The Angle is the apparent width of the sun’s disc, in degrees, as seen from the ground. A sun with a smaller angle produces shadows with sharper edges.

sun angle comparison

Sun angle of 1° (left) and 10° (right)

A 1° sun is essentially a point source, and it casts sharp shadows. At 10° the sun is bigger and the edges of shadows are diffuse. Take your pick.

Note that if you decide to use a coloured sun later (see “Part 3”) this is where you will change its colour: Light>Object Data>Nodes>Color.

Now go to the Object tab icon Object tab, and consider the location, rotation and scale of the light.

Light object transform initial

The location of the light does not matter: a “Sun” type light is considered to shine from an infinite distance regardless of its location. Scale should be left as all 1’s.

The rotation of the light however is all-important to us. It is here that we set the sun elevation and azimuth. These are both counter-intuitive in Blender, so here’s how it works.

Rotation X is always 0

Rotation Y is the complement of the familiar elevation angle (that is, 90 minus elevation). E.g., if you want a sun 60° above the horizon, set Rotation Y to 30°. Think of a light in a theatre clamped to a lighting pipe suspended over the stage. The pipe is the Y axis. The light is initially pointing straight down (0°) and you are swinging it up by 30°.

Rotation Z is the azimuth of the light, but instead of beginning at 0° for North and increasing clockwise (as we are familiar with from compass bearings), it begins at 0° for East and increases counterclockwise. (Think of angles measured on a coordinate plane.) It’s generally quickest to figure out by sketching on the back of an envelope, but the actual formula is Rotation Z = 450 − azimuth (subtract 360 from your answer if it is greater than 360).

A few common angles:

common rotations Z

If I want a sun 50° above the horizon, coming from azimuth 337° (north-northwest), my light’s Rotation numbers will look like this:

light rotation for 50 at 337
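The two rotation rules can be bundled into a tiny helper; a modulo handles the “subtract 360” case (360° and 0° being the same direction):

```python
def sun_rotation(elevation_deg, azimuth_deg):
    """Convert a compass sun position to Blender light Rotation values (degrees).

    X is always 0; Y is the complement of the elevation; Z converts a
    clockwise-from-north azimuth to a counterclockwise-from-east angle.
    """
    rot_y = 90.0 - elevation_deg
    rot_z = (450.0 - azimuth_deg) % 360.0
    return 0.0, rot_y, rot_z

print(sun_rotation(50, 337))  # (0.0, 40.0, 113.0)
```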

Do a test render (F12)

Hit F12 to get a test render at 20% of the size of the final render. A little Render window opens and after a pause you (hopefully!) get something like this.

test render basic

The test render often catches common mistakes, like forgetting to set up your light or camera. It also reveals how the overall image will look, and so you can adjust your light strength, azimuth and elevation to increase or decrease shadows and the overall brightness of the image.

Render

Go back to the Output tab icon Output tab, and set percentage (%) to 100%.

Hit F12 to render.

Note that you can interrupt a render by hitting ESC, or by closing the render window.

I get this result:

basic render

Pressing Shift-S in the Render window (or Image>Save As) allows you to save the image. If I saved this as “Blender shaded relief 1x 50 337.tif” I would then make a copy of the TFW world file I created back at the beginning and name it “Blender shaded relief 1x 50 337.tfw”. This makes something georeferenced that I can pull into my GIS.
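That copy-and-rename step is easy to script. A minimal sketch (the function name is mine, not a standard tool; it assumes the render covers exactly the DEM extent at the same pixel dimensions):

```python
import shutil
from pathlib import Path

def georeference_render(dem_tfw, render_tif):
    """Copy the DEM's world file alongside a rendered TIFF so GIS
    software can place and scale the render."""
    target = Path(render_tif).with_suffix(".tfw")
    shutil.copy(dem_tfw, target)
    return target
```

For example: `georeference_render("myDEM_32609_16x16.tfw", "Blender shaded relief 1x 50 337.tif")`.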

That’s it! You made a hillshade in Blender!

Now, to consider the many other things we can fiddle with in Blender—material, number of lights, denoising, etc.—go to Part 3.

Shaded relief with BlenderGIS (2020), part 1

This tutorial replaces my “Shaded Relief with BlenderGIS” tutorial from five years ago. At this point (March 2020) I am using Blender 2.82 and the most recent BlenderGIS addon. Daniel Huffman’s tutorial, which uses a different technique, has also been updated.

Blender 2-up denoised

It’s rather an understatement to say that Blender is complex. To keep things from getting out of hand, I’m going to take a straight path, right through the centre of the software, with a focus on getting out the other side with a completed render of shaded relief. But at the end, I will return and explore some of the interesting side trails that lead to the features that make the Blender hillshades so interesting.

Here’s the basic procedure I will follow to create a hillshade, once Blender is installed with the BlenderGIS addon. This can function as a simple checklist once you’ve become familiar with the process:

  1. Prepare your DEM
  2. Read the DEM into Blender as DEM raw data build
  3. Adjust Z scaling (vertical exaggeration)
  4. Create and adjust a georef camera
  5. Correct the final pixel dimensions of the output image to match the DEM
  6. Set final image type to be TIFF
  7. Turn the light into a Sun, and adjust its properties
  8. Do a test render
  9. Do a full render

Before you experiment with using Blender to make shaded relief, you will want to try the relatively simpler step of producing your own in GIS software such as QGIS. Consequently, this tutorial won’t explain a host of things that you would learn in that process: digital elevation models (DEMs), cell sizes, nodata values and projections, plus the art of combining shaded relief with other layers, stretching histograms and adjusting brightness and contrast. I will assume that you already know that, at its most basic level, creating shaded relief involves specifying a sun elevation and azimuth, and selecting a vertical exaggeration for your terrain.

I use the free GDAL command-line tools gdal_translate and gdalwarp to do re-projection and re-sampling, as well as produce world files. If the command line makes you queasy, QGIS offers a graphical front-end to these tools as well. (Processing>Toolbox, and search on “translate” or “warp.”)

Your first step will be to go to https://www.blender.org/ and obtain the free animation software Blender 2.8.

Installation of BlenderGIS

You only have to do this once, and then Blender is prepared to accept geographic data.

The BlenderGIS addon has installation instructions in its wiki at https://github.com/domlysz/BlenderGIS/wiki/Install-and-usage. Basically, they go as follows.

Go to the BlenderGIS site on github and hit the Clone or Download button, and then Download ZIP. You will receive a file called BlenderGIS-master.zip which you can store pretty much anywhere. Once you’ve installed BlenderGIS, you don’t need this file any more.

Within Blender,  go Edit>Preferences, and select the Addons tab. Click Install…, select the BlenderGIS-master.zip file, and click Install Add-on from File.

Once it is installed, be sure to check the box next to 3D view: BlenderGIS to enable it.

BlenderGIS installed

Note that a new menu appears in Blender: the GIS menu.

BlenderGIS menu

Two more tweaks will make Blender easier to use.

  1. Note the default cube that is always there in a new workspace. To delete it, hover the mouse over the cube and hit the Delete key.
  2. Find the Render tab on the right, and set Render Engine to Cycles. (If you have a good graphics card, you might also want to set Device to GPU compute)

setCycles

Now go File>Defaults>Save Startup File. This means that in the future, Blender will open with no cube, and with Cycles as the render engine.

Preparing your DEM

BlenderGIS can read DEMs whose data type is Int16, UInt16 or Float32. It works best with DEMs that are in a projection measured in metres (as opposed to degrees).

You’ll notice later that BlenderGIS claims that it can read a DEM projected in “WGS84 latlon” —in other words, EPSG 4326. I have found this works only when you are reading that DEM into a Blender scene already georeferenced in a metric projection.  You have to do a kind of complicated head-stand to make this occur, so we’ll take the simpler route and feed to BlenderGIS a DEM that is projected in metres. Typically for me (in northern British Columbia) this would be UTM Zone 9 North/WGS84 (EPSG 32609) or the BC Albers Equal-Area projection/WGS84 (EPSG 3005).

I’m not going to suggest converting your DEM to Float32 data type. This makes a difference only with large-scale mapping (e.g., 1:25,000 or larger), where you might actually see “steps” in your hillshade where the elevations change by whole metres. If you are doing such mapping, consider floating your DEM first.

You will also want a world file for the DEM, so you can georeference the hillshade Blender produces. World files used to be pretty common, but now that raster data so often comes with its georeferencing built in, they are less familiar, so I will explain what they are.

The world file was a clever invention: a six-line text file that contained the cell dimensions and the coordinates of the image origin (upper left corner). It was paired with the image file by having the exact same filename, but with a TFW extension (for TIFF images). For example, you could have myDEM.tif and its associated world file myDEM.tfw. This allowed the georeferencing information to be held externally to the image file.

world file

When an image file has no georeferencing (i.e., it is an ordinary TIFF image, not a GeoTiff), the world file is all your GIS software needs to correctly place and scale the image. The only additional piece that it will have to ask you for is the projection of the image, in order to make sense of the world file.

For JPG and PNG images, the world file extensions are JGW and PNW, respectively.
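A world file is simple enough to write by hand or by script. A sketch, assuming a north-up image (zero rotation terms); one subtlety worth a comment is that the fifth and sixth lines store the coordinates of the centre of the upper-left cell, not the raster’s corner:

```python
def write_world_file(path, cell_size_x, cell_size_y, ulx, uly):
    """Write a six-line world file for a north-up raster.
    (ulx, uly) is the upper-left CORNER of the raster; the world file
    itself stores the centre of the upper-left cell."""
    lines = [
        cell_size_x,              # x cell size
        0.0,                      # rotation terms: 0 for a north-up image
        0.0,
        -cell_size_y,             # y cell size, negative (rows run downward)
        ulx + cell_size_x / 2.0,  # x of the centre of the upper-left cell
        uly - cell_size_y / 2.0,  # y of the centre of the upper-left cell
    ]
    with open(path, "w") as f:
        f.write("\n".join(f"{v:.10g}" for v in lines) + "\n")
```

The coordinates here are illustrative, not from any particular DEM.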

We can use gdalwarp to re-project, re-sample and produce a world file. In the following example, I am re-projecting out of  lat/long/WGS84 (EPSG code 4326) into UTM Zone 9 north/WGS84 (EPSG code 32609), and re-sampling to 16 metre square cells. I will assume your DEM is a geotiff file.

gdalwarp  -s_srs EPSG:4326 -t_srs EPSG:32609 -r bilinear -tr 16 16 -of GTiff -co "TFW=YES" myDEM.tif myDEM_32609_16x16.tif

Where:

  • -s_srs: the Spatial Reference System (projection & datum) of the original (the “source”)
  • -t_srs: the SRS of the result (the “target”)
  • -r: resampling method (bilinear, cubic, etc.)
  • -tr: resolution, in metres (give two values, one for X and one for Y)
  • -of: output format
  • -co: additional creation options (TFW=YES means create a world file)

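If you batch-process many DEMs, the same command can be assembled in Python and run with subprocess. A hypothetical wrapper using only the gdalwarp flags shown above:

```python
import subprocess

def gdalwarp_command(src, dst, s_srs, t_srs, res, resample="bilinear"):
    """Build the gdalwarp command above as an argument list."""
    return ["gdalwarp",
            "-s_srs", s_srs, "-t_srs", t_srs,
            "-r", resample, "-tr", str(res), str(res),
            "-of", "GTiff", "-co", "TFW=YES",
            src, dst]

cmd = gdalwarp_command("myDEM.tif", "myDEM_32609_16x16.tif",
                       "EPSG:4326", "EPSG:32609", 16)
# subprocess.run(cmd, check=True)  # uncomment if GDAL is installed
```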
The terms “projection/datum,” SRS (Spatial Reference System), and CRS (Coordinate Reference System) are interchangeable for our purposes.

So at the end of this preparation step I have three things:

  • a DEM in a metric projection (e.g., myDEM_32609_16x16.tif)
  • a corresponding world file (e.g., myDEM_32609_16x16.tfw)
  • the EPSG code of the projection the DEM is in (e.g., 32609)

Read the DEM into Blender as Raw DEM

On the GIS menu, go Import>Georeferenced raster, and navigate to your DEM file.

In the right margin, set Mode to DEM raw data build and select the correct CRS (coordinate reference system) for your DEM. (If you do not check the Build faces box, you will get a point cloud.)

importGeoraster options

If the CRS you want to use is not on the dropdown menu, you can add it by clicking the “+” button, checking the Search box, typing in the EPSG code for your CRS into the Query box, and hitting Enter. It should appear in the Results box, and then you just check Save to addon preferences and click OK.

addingANewCRS

After a pause, during which Blender builds a plane mesh out of your DEM, you hopefully see a grey, 3D rendering of your terrain, floating in a coordinate space.

initial screen

A quick tour of Blender

Let’s take a brief detour here and learn some of the language of Blender, as well as some of its controls and unexpected behaviours.

Blender, being 3D animation software, is a world of vertices (points in space), edges (which connect vertices) and faces (which fill in between edges). In this case, the cells of the DEM have been translated into a square array of vertices, each with the appropriate height for the DEM cell it represents. These vertices have been connected with edges that form a square lattice like the cells of the DEM. Each square of four edges has a face.

The whole set of vertices, etc. is a mesh, specifically in this case a plane mesh.

In the lower right  corner, you will notice a status bar:

initial stats

 

This tells us that we have 588,978 vertices and 587,438 faces. My DEM happens to be 839 columns by 702 rows, and 839 x 702 = 588,978, so I could have predicted this number of vertices. This is useful because you can estimate, before reading in a DEM, how much RAM it will require. On average (and this varies as the size of the DEM increases), every 3,500,000 vertices requires 1 GB of RAM.

The number of faces is slightly smaller because the final row and final column generate no faces (839 + 702 = 1,541, minus 1 for the corner cell counted by both, hence 1,540 fewer faces than vertices).

The status bar also tells us we are presently using 291 MB of RAM. This will go up during the final render. As near as I can tell, available RAM is the only limit to the size of DEM that Blender can handle.
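The arithmetic above generalizes to any DEM. A sketch of the vertex and face counts plus the rough RAM estimate (the 3,500,000-vertices-per-GB figure is the average quoted above, not an exact rule):

```python
def mesh_stats(cols, rows, vertices_per_gb=3_500_000):
    """Predict mesh size and approximate RAM for a DEM of cols x rows cells."""
    vertices = cols * rows
    faces = (cols - 1) * (rows - 1)  # last row and column generate no faces
    return vertices, faces, vertices / vertices_per_gb

v, f, gb = mesh_stats(839, 702)
# v = 588,978 vertices, f = 587,438 faces, gb ≈ 0.17 GB of RAM
```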

You can see the vertices, edges and faces of your plane mesh if you zoom in and press TAB to toggle into Edit mode.

initial mesh in edit mode

Don’t worry if you can’t figure out how to do this just yet. If you do though, be sure to press TAB again to leave Edit mode before continuing on.

The upper right corner of Blender has a panel called an outliner that shows a tree of objects in the Blender world. At present it looks like this.

outliner

Notice that the plane (called “Thomlinson 32609 16m clip” in my case) is selected. (In the 3D view this is reflected by the fact that the plane is outlined in orange.) In addition there is a camera and a light.

Below the outliner is a Properties panel. On its left margin you will see a series of tabs identified by these icons. Hovering over them reveals their names.

Blender tab selector

We won’t have anything to do with some of these (Particles, Physics, etc.). The important ones for our purposes are Render, Output, Object and Object Data.

Note: sometimes the collection of tabs changes, depending on what object in the Blender world is selected. At this moment the plane is selected, but if the light were selected some of these tabs would disappear, and the Object Data tab would display a Light object data tab icon green light bulb icon.

The rest of the screen is taken up by the large 3D Viewport panel. Navigating in this is similar to using Google Earth, but a little different.

  • Rolling the mouse wheel zooms you in and out.
  • Holding down the mouse wheel button while moving the mouse in this panel allows you to rotate around objects.
  • To pan, hold down Shift and the mouse wheel button while moving the mouse to either side.

The other thing to know about Blender is that some keys work only when the cursor is over the 3D Viewport.

  • NumPad-period (.) centres the view on the selected object.
  • NumPad-0 toggles between this view and camera view.
  • TAB toggles the selected mesh in and out of Edit mode (and we won’t be using this).

Remember that Blender is a studio for animators. It is designed to let you sculpt new objects, and then generate many frames of animation in which various objects are lit by lights and shot from a specific camera. We are not going to create any new objects, and we are going to shoot only a single frame; so there are many controls we will not use.

However, setting up our light and camera are crucial, and that’s what we do in Part 2.


Civilization in the United States

I came across this unexpected map the other day:

Markham, S.F., Climate and the Energy of Nations, p. 189, Intelligence in the United States

This is the kind of thing you just do not want to take out of context. So here’s the context.

This is from a little hardcover volume I picked up at a used-book store, called Climate and the Energy of Nations. It was written over the course of the 1930s by a British author named S. F. Markham, and published in 1947 by Oxford University Press.

Markham describes himself as a former aide to British Prime Minister Ramsay MacDonald, one of the founders of the Labour Party. Markham says he conceived a passion for investigating how climate (primarily temperature and humidity) affects the energy of nations. By “nations” Markham means both states and ethnic groups, the two being commonly conflated in the 1930s, when a state representing a specific nation was regarded as a formula for success. (E.g., Greece for the Greeks, Turkey for the Turks.) By “energy,” Markham is getting at (he never really defines this term) a sort of generalized ability to do stuff, be great, solve problems and get ahead. And, just to be forewarned, when he discusses energy it spills over pretty quickly into murky assertions about intelligence, civilization, and which nations produce “great men.”

Although ostensibly data-driven, the book somehow returns again and again to the conclusion that (since the invention of the chimney) northwest Europe has had the world’s best controllable climate coupled with the best reserves of coal and oil for heating—and therefore its people have the most “energy.” In a surprising coincidence, he discovers that the people in the world with the greatest national energy seem to be… the British.

isotherms in Europe Markham

Go figure.

So, anyway, what’s this about intelligence in the United States?

Well, Markham is quite interested in what happens when people from his ideal climate regimes move to less ideal climates. In the United States his data identifies New England and the Pacific coast as having ideal climates (north of the 75° F summer isotherm and south of the 10° F winter isotherm). The rest of the country, he concludes, at some point of the year is too cold or too hot for people to have optimal energy.

isotherms in US Markham

The map at the top of this post, “Intelligence in the United States,” is Markham attempting to support his climate point, which is that even when colonized by emigrants from the same part of northwest Europe, subsequent generations don’t do as well in certain parts of the United States.

But where did he get this state-by-state data on intelligence? Markham explains that it is from a contemporary of his, Frederick Osborn.

No assessment of a nation’s energy is or can be complete without some assessment of its culture or intelligence. … Possibly the best study ever carried out in this connection is that by Frederick Osborn of the American Museum of Natural History, who in 1933 produced an ‘Index of Cultural Development’ for the various states based on

  1. Mental test among schoolchildren
  2. Army intelligence tests
  3. Illiteracy percentages
  4. Magazine readers per 100 total population
  5. School teachers’ salaries
  6. Library statistics

(p.188)

(Notice, by the way, that Markham took Osborn’s Index of Cultural Development and mapped it as “Intelligence in the United States.” A bit of sleight of hand.)

I have to say, this is where I really put the book down and marvel at how far we’ve come since the 1930s. I mean, you would never today have someone taking these statistical indices and saying they represent cultural development—however much you agree that magazine readership and libraries are a good thing. Or, if you encountered it, you would assume it was a production of the far Right, not of a venerable institution like the American Museum of Natural History.

I’m not exactly sure what has happened in the world of ideas over the last ninety years to preclude this kind of thing now—something to do with a discrediting of positivism and the rise of critical theory. Maybe someone can fill me in.

But throughout the book one finds the evidence of what a distant and alien world the 1930s was. Markham, for example, inevitably speaks of “man”:

Since civilization is produced by men—and therefore by individuals—the question arose as to what conditions render it possible for a man to be at his best mentally and physically, for it seemed not illogical that where men enjoy conditions that permit them to be at their best there are present the raw essentials of civilization.

(p. x)

Although he rejects racial theories of predestination at the beginning of his book, the most incredible racial and ethnic generalizations seem to flow unconsciously from his pen.

Coolness is a prime essential to the physical work (including typewriting), without which all mental effort becomes, as the Arab’s, mere conversational speculation, barren in result.

(p. 215)

And then there’s his whole chapter on the ‘poor white’ problem.

There are, of course, great areas, such as India, China, and the Dutch East Indies, where no permanent white settlement has taken place, but in comparable areas, such as portions of South Africa, the southern states of the United States and the West Indies, there has arisen the problem known to the world as that of the ‘Poor White.’

(p. 134)

Well, back to the maps.

Continuing to demonstrate his climate theory, Markham presents two more maps that portray things he feels represent national energy: “Infantile Mortality 1930-34” and “Per Capita Income 1940.”

Markham, S.F., Climate and the Energy of Nations, p. 183, Infantile Mortality 1930-34Markham, S.F., Climate and the Energy of Nations, p. 192, Per Capita Income, 1940

[Canadians—well, certain Canadians—can take heart from that little caption below the figure on infant mortality: “If this map were extended to include Canada, British Columbia and the Ontario peninsula would be in the best (i.e., lowest) area.”]

Things aren’t looking good for the American South or mid-continent, but readers of Mark Monmonier’s excellent book How To Lie With Maps might have some intelligent questions to ask here about how Markham chose his data breaks. Was there a natural break at 58 deaths out of 1000 births, or was that chosen because it helped this map look like the isotherm map above?

Finally, he wraps it all up into an aggregate map whose data he says is a combination, with equal weighting, of the data behind the previous three maps. This map, he says, represents civilization.

Markham, S.F., Climate and the Energy of Nations, p. 195, Civilization in the United States

But of course when you look at it the first thing you’ll think is these are the two sides in the American Civil War. How Markham missed considering that the devastation of the South in that war might outweigh climate as an explanation for its scoring relatively poorly in the 1930s, I don’t know. I think he was doing what we should all never do in a Science Project: looking for data that confirmed his hypothesis.

[And, just a cartographer’s observation: that little wavy dividing line across Missouri—it’s quite suspicious. This is ostensibly state data: so how did they divide Missouri as if it were county data? Does this represent some embedded feud, some cultural rift,  between north and south Missouri that I don’t know about?]

At the end, Markham includes a chapter on air conditioning, which he thinks will “change the whole course of history in the United States,” a country that he sees as being for the most part burdened by its climate. However, he points out that while air conditioning makes office and factory work pleasant, it does not help the person working outside. In a cold climate, working outside keeps one warm, but in a hot climate “activity adds to one’s feeling of malaise.”

Climate and the Energy of Nations is kind of an entertaining read, if you’re able to tolerate how disturbing the 1930s was. I’m still not clear on whether Markham was invested in the status quo, or sought to change it. But, by choosing climate as his determiner of the “energy of nations” he picked something that (he would have thought) would never change. It’s a deterministic model of who will thrive. And to that extent, the book does get you thinking then about how hot and cold affect us all, and how climate change could yet have additional bad results we haven’t even clocked.


Georeferencing (registering) a map in a Lambert projection

This is a procedure that came up in a discussion with a friend, and I think it is tricky enough to be worth recording here.

Specifically we are using QGIS 3 to georeference a 1941 map of the Odessa, Ukraine, area, one of the 1:1,000,000 International Map of the World Series.

Odessa 1941 reduced

This map is bounded by latitudes 46° and 51° north, and longitudes 27° and 36° east.

We want as little loss of image quality as possible, therefore we want to avoid warping (re-projection). If warping the map were not a concern, we could georeference it in a geographic projection (e.g., EPSG 4326 or 4267) with a few ground control points (GCPs) in degrees, using the intersections of latitude and longitude lines.  But in this case we need to georeference it in the original Lambert projection, as it was printed. The transformation will be “linear” and in fact only a world file will be written. The world file will enable QGIS to read in the map image without warping it.

This will not be possible unless we can figure out what the original projection was. Fortunately at the bottom of the map the person who did the scanning has left the statement of projection.

projection text

A Lambert conic projection usually relies on four parameters. There are the two parallels of latitude at which the cone touches the globe: these are the two numbers listed here: 36° north and 52° 48′ (52.8°) north. They will be called lat_1 and lat_2 in the projection definition.

Then there are the coordinates of the origin point of the projection: lon_0 and lat_0. The meridian that runs straight down through the centre of the map is clearly the central meridian of the projection, because it runs perfectly vertically, the only longitude line that does so on the map. The centre of the map falls halfway between longitudes 31 and 32, so lon_0 is 31.5°.

Lat_0 is a little harder to figure out. However, in my experience it doesn’t really matter what you choose for lat_0. I chose 40° north.

The first step then is to create a new CRS (coordinate reference system) in QGIS for this Lambert Conic projection. We go Settings>Custom Projections, and click the “+” button to add a new CRS.

new CRS

Plugging the parameters we determined above into the normal Lambert Conic definition, we get this:

+proj=lcc +lat_1=36 +lat_2=52.8 +lat_0=40 +lon_0=31.5 +x_0=0 +y_0=0 +ellps=intl +units=m +no_defs

QGIS will assign an EPSG number to your new CRS. In this case I got 100030; the number assigned varies, but it is always greater than 100000.

The next step is for me to put my main map window in QGIS into the new projection, and open an array of latitude and longitude lines, such as the Natural Earth 1:10 million scale one-degree graticule layer. And I turn on labels for these lines so I can see what degrees I am looking at.

display graticule in custom projection

The reason I do this bears directly on the central technique we are going to use here. Because the original map is in Lambert, and I want to register it in Lambert, I will have to enter Lambert coordinates for each of the GCPs. But looking at the map I only see degrees: I don’t see Lambert coordinates.  Fortunately QGIS can tell me what the Lambert coordinates are for each grid intersection, as long as I am displaying my grid in the same CRS as I want to use to register the map. You can witness this by zooming in on and hovering your mouse over a grid intersection and looking at the Coordinate text box in the bottom margin.

noting Lambert coordiantes

There are the Lambert coordinates—at least for this specific Lambert Conic projection we are using—for 46° north, 26° east: -421460 metres east, 674061 metres north.
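You can sanity-check these numbers without QGIS. Here is a sketch of the Lambert conformal conic using Snyder’s spherical formulas and the parameters chosen above; the printed map uses the International ellipsoid, so the results differ from what QGIS reports by a fraction of a percent:

```python
from math import radians, sin, cos, tan, log, pi

def lcc_sphere(lon, lat, lon0=31.5, lat0=40.0, lat1=36.0, lat2=52.8,
               R=6371000.0):
    """Spherical Lambert conformal conic with the parameters chosen above.
    Returns (easting, northing) in metres."""
    lat0, lat1, lat2, lat, dlon = map(
        radians, (lat0, lat1, lat2, lat, lon - lon0))
    n = (log(cos(lat1)) - log(cos(lat2))) / (
        log(tan(pi / 4 + lat2 / 2)) - log(tan(pi / 4 + lat1 / 2)))
    F = cos(lat1) * tan(pi / 4 + lat1 / 2) ** n / n
    rho = R * F / tan(pi / 4 + lat / 2) ** n    # radius to this parallel
    rho0 = R * F / tan(pi / 4 + lat0 / 2) ** n  # radius to the origin parallel
    theta = n * dlon
    return rho * sin(theta), rho0 - rho * cos(theta)

x, y = lcc_sphere(26, 46)  # close to the -421460 / 674061 seen in QGIS
```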

We don’t need to write these down though. We will get them assigned to the map image in a more elegant way.

Now it’s time to open the georeferencer (Raster>Georeferencer) and bring in the map image. Immediately you will be asked which CRS you want to georeference this map in. Choose the custom CRS you just made (in my case, 100030).

image read in

Before you place GCPs, go to the settings of the georeferencer and set up your transformation parameters to ensure no warp will result.

transformation settings

You want the transformation type to be Linear. The resampling method is not important because no resampling should occur. The target SRS is your custom CRS. Check the box called Create world file only (linear transform). (I also like to check the Load in QGIS when done box.)

Note that in the georeferencer the bottom line now looks like this:

georef parameters on status line

I’m going to place the first GCP in the lower left corner (the southwest corner), which is 46° north, 27° east. Before I do this, I go into the main map window and zoom in on just that intersection, to a fairly large scale, say 1:10,000.

Now in the georeferencer I place my GCP point on 46° north, 27° east, and QGIS asks me for the X and Y of that point. Or, I can click From map canvas.

first GCP

Once I click From map canvas, the georeferencer is hidden, the cursor becomes a cross, and I am invited to pick that same point on the main map canvas. I carefully click right on the intersection of 46° north and 27° east, and immediately the Lambert coordinates of that point as filled in for me in the dialogue box:

lambert coordinates transferred

I click OK, and I’m ready to do my next GCP. Remember, the first thing I will do is pan the main map to the coordinate intersection where I’m about to add this second GCP.

A linear transformation never requires more than three GCPs, but I like to do four so I get an estimate of my error. I do the four corners of the map.

At this point I can see, from the GCP table at the bottom of the georeferencer, how large my error is.

GCPs created

The residual looks like 3 to 5 pixels, and the mean error is 6.5 pixels, quite acceptable in this image which is 4700 x 5700 pixels.

Now I hit the Start Georeferencing button (green triangle “play” button) and a world file is written. Because I’ve checked Load in QGIS when done, I immediately get asked for the CRS of the new georeferenced map, and again I select the new custom CRS I created for this map.

The map appears in the main map window, and I drag it under the graticule layer, to see that it is properly georeferenced.

map georeffed

There is no new raster image with this linear transformation, just a small text file with the same name as the map and a .WLD extension.

To clip off the collar of the georeferenced map, in case I want to mosaic it with adjacent maps, I can create a polygon in a temporary layer that covers the part of the map I want to keep…

ready to clip 0

 

and then do Raster>Extraction>Clip raster by mask layer. I like to check Create an output alpha band and Keep resolution of output raster.

ready to clip

And the result is a clipped raster with transparency in band 4.

clip finished

 

Octagons in Baku

Besides the window in the Divankhana, I saw a lot of geometric design in Baku based around octagons.

For example, consider this pattern in a window in the external courtyard wall at Baku’s Taza Pir mosque.

DSCF5870

This beautiful pattern, with its eight-pointed stars set within octagons, turns up on plate 67 in Jules Bourgoin’s 1879 Les Éléments de L’Art Arabe (which you can download from archive.org).

Bourgoin plate 67 3x3

Its wallpaper group is the fairly common *442 (p4m), and it is generated by tessellating a square cell.

Bourgoin plate 67 single tile

Construction of this pattern is straightforward. The eight-pointed star in the centre is inscribed in a circle whose radius is one quarter the side of the square. The vertices of the octagon are found by extending the sides of the star. The rest of the construction lines are extensions of the octagon sides, and lines connecting star dimples that are three apart.

 

bourgoin pattern 67 construction lines
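Assuming the central star is the classic two-overlapping-squares construction (an assumption on my part; the plate shows the finished lines, not the compass work), its sixteen vertices can be computed directly. The inner “dimple” points of such a star sit at radius R·√(2 − √2):

```python
from math import cos, sin, pi, sqrt

def eight_point_star(R=0.25):
    """Vertices of an eight-pointed star (two overlapping squares) inscribed
    in a circle of radius R -- one quarter the tile side, as described above.
    Returns 16 (x, y) points alternating outer points and inner dimples."""
    r = R * sqrt(2 - sqrt(2))  # radius of the dimple points
    pts = []
    for k in range(8):
        a = k * pi / 4         # an outer point every 45 degrees
        pts.append((R * cos(a), R * sin(a)))
        pts.append((r * cos(a + pi / 8), r * sin(a + pi / 8)))
    return pts
```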

But one enters the Taza Pir compound via a stairway from the street. The panels in the stairwell are related to, but subtly different from, the window design!

DSCF5873

Seen head-on…

taza pit entryway pattern 4-tile

What did they do here? There is the same eight-pointed star in the centre, and the same enclosing octagon, but in this case they’ve trimmed the repeating square back to the borders of the octagon.

taza pit entryway pattern square2

As a result, the square tile borders stand out strongly as lines, and around each point where four tiles meet we get a big diamond holding four small diamonds.

taza pit entryway pattern 12-tile

(This also belongs to the *442 wallpaper group.)

Now, in the Old City, I came across a piece of octagon-based decoration that illustrates what happens if one doesn’t follow best practices, as explained by Eric Broug.  This pattern involves starting with the same pattern as in the Taza Pir windows (above; a.k.a. Bourgoin’s Plate 67), but then repeating a somewhat random subset of it. In other words, incorrect tessellation.

DSCF5758

It would appear that the manufacturer of these pre-cast concrete blocks selected a piece out of the overall pattern that was not the all-important basic square, but rather a rectangle.

subset taken for pre-cast

Hence each of the concrete blocks looks like this:

single block

When you put them together, lines match up, but the effect of the original design is lost.

tiled pattern

The wallpaper group of this pattern would be *2222 (pmm).

Elsewhere in the Old City, there were pre-cast patterns that did tessellate pleasingly, again with octagons.

DSCF5747

But back at the Taza Pir mosque, I spotted this on an adjacent building, which I believe is Baku Islamic University:

Taza Pir

The grill pattern is octagons packed together, with squares in between; and eight radial lines emanating from the centre of each octagon. It’s basically the central column of this pattern:

Taza Pir end pattern

But look at what they did in the point of the arch. It’s beyond my knowledge to know whether this is best practice or not, but it is definitely creative.


A window in the Divankhana, Baku

Eric Broug, from the School of Islamic Geometric Design, writes that sometimes while travelling he sees a piece of contemporary Islamic geometric design and recognizes it as, well, let’s say less than best practice. (He sometimes posts images of this sort of thing on Instagram under the hashtag #cpigd, which stands for Common Problems in Islamic Geometric Design.)

What sort of mistakes are they? He explains on his page on best practices, but to me the most common one is incorrect tessellation, where a block of pattern is repeated in ways that cause lines to abruptly stop instead of continuing on.

I thought it would be interesting to take a look at the designs I found in Azerbaijan, in the cases where they were identifiably Islamic, and ask the same question: are they examples of good, traditional design?

So, bearing that in mind, let’s look at a grill window I found in the Divankhana of the Palace of the Shirvanshahs in Baku, Azerbaijan.

The Palace of the Shirvanshahs is the premier piece of historical architecture in the Old City of Baku, or, as it’s called in Azerbaijani, İçərişəhər or Icheri Sheher. The original buildings are thought to date from the 15th century, but most of the palace was heavily renovated and restored in the 20th century, so it’s not immediately clear whether the details one sees are original or the work of a restorer.

The Divankhana (which is also variously called the Divan-Khane or Divanhane) is a structure in its own courtyard just off the outer courtyard of the palace. It holds a pleasing octagonal pavilion, whose original function is unknown (there are many theories). The pavilion is domed and consequently two stories in height, so it stands above the courtyard wall and can be seen from the palace courtyard. In one corner of the Divankhana, a staircase leads up to a locked door on the upper storey of the pavilion.

This window is at the top of that staircase, and looks out into the palace’s outer courtyard. Here it is seen from inside.

centre divankhana window from inside

What I thought when I saw this was, “There’s no way this can be a best practices design.” There were so many wacky elements that I had never seen in an Islamic geometric design before. For one, I couldn’t find a single axis of reflection in it, anywhere. For another, it contained a number of strange, three-way intersections.

Is it a bad design, perhaps a modern artist not working within traditional lines, or could this be authentic traditional design?

Let’s look at how the pattern works.

The whole pattern begins with a pair of adjacent large octagons.

basic octagons

These fill the window, as shown.

Centred on the vertices of each octagon are eight smaller octagons, sized so that when they overlap they bisect each other’s sides.

secondary octagons full

As you might design it on paper

secondary octagons

Clipped to the window opening

These circles of smaller octagons define an empty space in the centre of each of the larger octagons, a space which is an eight-pointed star.
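If you want to play with this construction yourself, here is a rough Python sketch of it. The small-octagon radius below is a placeholder of my own choosing; the exact radius that makes overlapping neighbours bisect each other’s sides isn’t computed here.

```python
import math

def octagon(cx, cy, r, phase=math.pi / 8):
    """Vertices of a regular octagon with circumradius r, centred at (cx, cy)."""
    return [(cx + r * math.cos(phase + k * math.pi / 4),
             cy + r * math.sin(phase + k * math.pi / 4)) for k in range(8)]

# Two adjacent large octagons: centres two apothems apart, so that they
# share a vertical side, as in the window.
r = 1.0
apothem = r * math.cos(math.pi / 8)
big_left = octagon(0.0, 0.0, r)
big_right = octagon(2 * apothem, 0.0, r)

# A smaller octagon centred on every vertex of each large one. This radius
# is an assumption for illustration only; in the real design it is chosen
# so that overlapping neighbours bisect each other's sides.
r_small = r * math.sin(math.pi / 8)
small = [octagon(x, y, r_small) for (x, y) in big_left + big_right]
```

Plotting `big_left`, `big_right` and `small` with any drawing library reproduces the layout in the photos above, empty eight-pointed star and all.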

So far, so good, and very symmetrical and, in fact, infinitely tile-able. The cleverness comes with what they did inside each 8-pointed star.

They divided this space with a four-armed pattern which has rotational symmetry but no mirror symmetry.

They ran it in opposite directions in the top and bottom halves of the window. So, looking from inside the window, there is a counter-clockwise star in the top half and a clockwise star in the bottom half.

 

counterclockwise star

Counter-clockwise star

clockwise star

Clockwise star

stars full windowed

Full pattern

This is fairly mind-boggling for traditional Islamic design — I think. (I’m no expert.) The little pattern of four “hammerhead” shapes that circles within each 8-pointed star looks more M. C. Escher than standard geometric design.

Mathematically, we might ask: does this pattern at least pass the test of being extendable in all directions?

Well, yes. One would simply add more big octagons above, below and to the sides, and then add the smaller octagons, etc. This could go on forever.

extended

Each big octagon hosts an eight-pointed star in its centre. If you alternate big octagons that host clockwise stars with big octagons that host counter-clockwise stars, following a checkerboard-like pattern, the centre of each star would be a four-fold centre of rotation. The corners where four octagon tiles come together are two-fold centres of rotation. And there are axes of reflection along the lines where adjacent big octagons touch.

extended2

Axes of reflection in blue; four-fold centres of rotation as green squares; two-fold centres of rotation as red diamonds.

Patterns with these axes of reflection and pattern of rotational centres belong (mathematically) to the “wallpaper group” known in orbifold notation as 4*2, and in IUC notation as p4g. (There are seventeen possible wallpaper groups.) Patterns that are 4*2 have a “twist” to them so that the basic square unit of the pattern is not the same as its mirror image.

In fact, the fundamental tile for this pattern, a tile from which the entire pattern can be generated through reflection and rotation, is triangular.

fundamental tile

4*2 is an unusual wallpaper group for an Islamic geometric design, most of which are *442 (p4m) or *632 (p6m). But it’s not unheard of, and would not lead us to conclude that this pattern doesn’t use best practices.
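For reference, here is how the orbifold and IUC names used in this post line up, as a plain lookup table and nothing more.

```python
# The wallpaper-group names that appear in this post, orbifold -> IUC.
ORBIFOLD_TO_IUC = {
    "*2222": "pmm",   # the mis-tiled pre-cast blocks
    "4*2":   "p4g",   # this window, with alternating stars
    "*442":  "p4m",   # one of the two commonest groups in Islamic design
    "*632":  "p6m",   # the other common one
    "442":   "p4",    # rotations only, no reflections at all
}
```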

And, then it gets a little more complicated.

At the Palace of the Shirvanshahs I never shot a picture of the outside of the window. When I got home I found that, while there’s no Google Street View in Azerbaijan, someone had conveniently taken a photosphere in the main courtyard of the palace four months before I was there.

Here, in the photosphere, is the facade of the Divankhana as seen from the outer courtyard. The Divankhana is the building with the whitish dome.

Photoshere image palace of Shirvanshahs

Here’s a close-up of the three window openings on its second floor:

all three windows

Only the central one is full-sized; the outer ones are reduced in height, and the left one is completely blocked. I was inside the rightmost window, and in this image we see it as the reverse of my original picture.

right window divankhana

But look at the centre window.

centre window grill Shirvanshah palace divankhana

The stars in this window turn the same way. They’re both counter-clockwise (as seen from outside).

This pattern’s wallpaper group, incidentally, would be 442 (orbifold) or p4 (IUC); it has no axes of reflection and two kinds of centres of four-fold rotation. I have never before seen an Islamic geometric design in the 442 wallpaper group.
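One way to see why alternating the stars restores mirror symmetry while same-way stars destroy it is a toy chirality model (my own sketch, not a formal group computation): represent each big octagon only by the handedness of its star, and ask whether reflecting the grid, which also flips every star’s handedness, gives the grid back.

```python
# Each cell in a grid of big octagons holds a star of handedness +1
# (counter-clockwise) or -1 (clockwise). A mirror reflection both reverses
# the cell positions and flips every star's handedness.

def has_mirror(chirality, n):
    """True if reflecting the columns (and flipping handedness) preserves the field."""
    return all(-chirality(i, n - 1 - j) == chirality(i, j)
               for i in range(n) for j in range(n))

checkerboard = lambda i, j: (-1) ** (i + j)  # the alternating window: p4g-like
same_way = lambda i, j: 1                    # the centre window: p4-like
```

On an even-sized grid the checkerboard survives reflection, because swapping columns undoes the handedness flip; the uniform field cannot, which is why the centre window has no axes of reflection at all.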

So the windows are not the same, which prompted some contemplation.

This reminded me of a #cpigd Instagram posting by Eric Broug at https://www.instagram.com/p/Biq9_d3la-Y/?tagged=cpigd

Design problems in Cairo. *All* the restored, replaced geometrical windows on *all* the restored buildings in this street in Darb al-Ahmar are identical. Originally, there would have been a great diversity of design.

This post prompted some pretty hot discussion in the comments about whether window patterns in a single building were traditionally uniform or not. Broug was of the opinion that

conventionally in Islamic Architecture, different patterns would be next to each other. possibly to encourage contemplation and reflection by looking for differences and similarities

At the Divankhana, the blocked-up and untouched left-hand window suggests that this second storey has not been aggressively restored, and that the windows we are looking at are not recent replacements.

And I wonder if this subtle difference between the centre and right windows was an example of encouraging contemplation and reflection (no pun intended). It certainly got me contemplating.