Reconstructing the Western High Gate Digitally

Projects | December 16, 2019

Written by Ariel W. Singer, Egyptologist, PhD candidate at the University of Chicago and Epigrapher at the Epigraphic Survey of the Oriental Institute, University of Chicago

Fig. 1: The Western High Gate at Medinet Habu, looking east; photo by O. Murray.

In the 2018-19 season at the Epigraphic Survey we began to seriously experiment with the idea of creating 3D photogrammetric models for the blocks of the Western High Gate (WHG) at Medinet Habu Temple. This project stemmed from the issues that arose as we were trying to accurately and clearly record the dimensions and architectural features of the large number of blocks there, some of which are quite sizable and some of which are quite fragmentary. 

The background of the Western High Gate

The Western High Gate and its significantly better-preserved counterpart, the Eastern High Gate, were uncovered and analyzed by Uvo Hölscher in the course of his five-year excavation of the Medinet Habu complex, from 1927 to 1933.[1] The existence of the Eastern High Gate had long been known, but it was not until the end of the project (the last three months, to be specific) that he turned his focus to the western side of the enclosure wall and found the blocks and semi-intact foundation of the WHG. 

Fig. 2: The ground plan of the Temple of Ramses III at Medinet Habu, as reconstructed by Hölscher in OIP 21 (pl. 2); the Western High Gate is highlighted in green by us.

The precise history of the WHG is not at all clear. The earliest damage likely happened at the end of Dynasty 20 in a significant conflict and, although the gate may have continued to be used subsequently, at some point after Dynasty 21 it was closed up by reusing some of the fallen blocks. The remaining preserved blocks were tumbled about the complex, primarily to the west, when Hölscher began his excavation. 

Although Hölscher included descriptions and photos of a few of the decorated blocks in his publication, this was only a fraction of the material; the rest was left unrecorded. Thus, in 2013, the Epigraphic Survey began work at the WHG under the direction of epigrapher Jen Kimpton to organize and analyze the material there, including creating a database and initiating a full epigraphic program.

Beginning the photogrammetry process

One of the goals of this work was to create an on-site open-air museum. We knew we had a nice grouping of blocks that appeared to join based on their decoration; however, before moving any of them (one was quite large and some were rather fragile), we wanted to create a 3D reconstruction to test these joins. This was particularly important because they are all decorated on multiple sides, which is difficult to make clear in two dimensions. In addition, we wanted to be able to visualize some of the very large blocks that we thought joined these, but which require serious equipment to shift around. 

Fig. 3: Our core set of blocks from the interior of the Western High Gate; the original photo is from Hölscher’s publication OIP 55 (pl. 26), but he neglected to mention there that these blocks are all decorated on their sides as well – the sides have been added here from photos by Y. Kobylecky, as has the broken bottom block identified as a join by J. Kimpton. Note that the broken bottom block is decorated on the back as well, with a part of the exterior carving, which allowed us to place this group into the larger context of the WHG.

Owen Murray, one of our photographers who specializes in photogrammetry, kindly agreed to give a few of us a tutorial in how the Metashape Pro (then PhotoScan) software works and how best to shoot an object in order to create a model. Using this foundation, we decided to run a few tests to come up with the best workflow for our specific environment. 

Camera choices: One of the key issues we are dealing with is the close proximity of many of the blocks to other blocks (or walls, depending on the exact location), which means that trying to get a good range of shots with a larger camera was often impractical. As a test, we first shot a moderately sized block (with some obstructed areas) using both a digital SLR and an iPhone (a 7 – we are hoping to try out the new iPhone 11 Pro with its improved camera this season) in order to compare the quality of the texture of the final model. We found that although the texture was somewhat sharper with the SLR, we had more difficulty getting the images to align and had areas with poorer coverage because the camera was too large to maneuver in the confined spaces. This would undoubtedly be less of an issue for a professional photographer, but we are trying to create a streamlined system that allows multiple members of our group to contribute. 

Target choices: We also had to figure out the best positioning for our targets. We knew early on that we would use the ‘tritarget’ (designed for us by our photographer Hilary McDonald), but, since we were shooting in the round, we wanted to see if it would be necessary to use more than one of them on the block. Fortunately, even with the tritarget on only one face we were able to achieve an error no greater than 5 mm, and often quite a bit less (we also always manually double-check the measurements in the field to ensure that our models are as accurate as possible). As for placement, we always choose an undecorated area with no architectural features, preferably near where the block number is written (that way, if there is any question about which block we have shot, we know exactly where to look to identify it). 
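The manual double-check described above amounts to comparing a tape or caliper measurement taken on the block against the same distance measured on the scaled model. A minimal sketch of that comparison is below; the helper names (`scale_error_mm`, `within_tolerance`) and the 5 mm tolerance are our own illustration, not part of any software we use, and the numbers in the usage line are hypothetical.

```python
# Sketch of the field accuracy check: compare a distance measured on the
# physical block against the same distance measured between two points on
# the model. All values are in millimetres.

def scale_error_mm(field_mm: float, model_mm: float) -> float:
    """Absolute difference between the field and model measurements."""
    return abs(field_mm - model_mm)

def within_tolerance(field_mm: float, model_mm: float, tol_mm: float = 5.0) -> bool:
    """True if the model measurement falls within the target error (5 mm here)."""
    return scale_error_mm(field_mm, model_mm) <= tol_mm
```

For example, `within_tolerance(412.0, 409.5)` reports an acceptable 2.5 mm discrepancy, while a 7 mm discrepancy would flag the model for re-checking.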

Fig. 4: Block MHbl 3299, seen above in the upper left corner of fig. 3, with the tritarget marked in the green circle; photo by A. Singer (taken as a part of the photogrammetric process).

To shoot the block we use the standard pattern of creating a hemisphere of photos at a regular distance from the block, with each photo overlapping in the area of coverage by approximately two-thirds. We then also shoot a series of photos closer to the block to increase the resolution of the model’s texture. If the block is small enough (and sturdy enough), after shooting the ‘top’ half, we will flip it over (leaving the marker in exactly the same place) and shoot the ‘bottom’ – this will allow us to create a complete model later on in processing. 
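The hemisphere-of-photos pattern above can be roughed out numerically: if adjacent frames should overlap by about two-thirds of their coverage, the angular step between stations is roughly one third of the camera's field of view, and the rings of stations shrink toward the top of the hemisphere. The function below is a sketch under those assumptions; `hemisphere_stations` and the 60° field of view are hypothetical, not a tool we actually use in the field.

```python
import math

def hemisphere_stations(fov_deg: float = 60.0, overlap: float = 2 / 3):
    """Rough (elevation, azimuth) camera stations on a hemisphere around a
    block, assuming adjacent frames overlap by `overlap` of their coverage,
    so the angular step between shots is fov * (1 - overlap)."""
    step = fov_deg * (1 - overlap)  # e.g. 60 degrees * 1/3 = 20 degrees apart
    stations = []
    elev = 0.0
    while elev < 90.0:
        # rings get shorter toward the pole, so fewer shots are needed there
        ring = max(1, math.ceil(360.0 * math.cos(math.radians(elev)) / step))
        for i in range(ring):
            stations.append((round(elev, 1), round(i * 360.0 / ring, 1)))
        elev += step
    stations.append((90.0, 0.0))  # one final shot straight down over the block
    return stations
```

With the defaults this yields on the order of sixty stations for the main hemisphere pass, before the closer detail shots are added.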

The specifics of using Metashape Pro to create the blocks and merge the chunks into a complete 3D model will be addressed in a future article. However, there are two important things to note before exporting any models from Metashape Pro:

  1. If the model is not in close proximity to the grid (model > show/hide items > show grid), it can be a problem when using the mesh processing software.
  2. If a file name has more than eight characters, some programs will not read the texture files.
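Both of these pre-export pitfalls can be caught with a quick script. The sketch below parses the vertex lines of an exported OBJ and warns when the name is too long or the mesh centroid sits far from the origin; `check_obj_for_export` and the 10-metre threshold are our hypothetical illustration, not a function from Metashape Pro.

```python
from pathlib import Path

def check_obj_for_export(path):
    """Sanity-check an exported OBJ against the two pitfalls noted above:
    (1) base file names longer than eight characters can break texture
    loading in some programs, and (2) the mesh should sit near the grid."""
    warnings = []
    p = Path(path)
    if len(p.stem) > 8:
        warnings.append(f"file name '{p.stem}' is longer than 8 characters")
    xs, ys, zs = [], [], []
    for line in p.read_text().splitlines():
        if line.startswith("v "):  # geometric vertex line: v x y z
            _, x, y, z = line.split()[:4]
            xs.append(float(x)); ys.append(float(y)); zs.append(float(z))
    if xs:
        centroid = (sum(xs) / len(xs), sum(ys) / len(ys), sum(zs) / len(zs))
        if max(abs(c) for c in centroid) > 10.0:  # arbitrary threshold, metres
            warnings.append(f"mesh centroid {centroid} is far from the origin")
    return warnings
```

Run on a freshly exported model, an empty list means both checks passed.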

The Digital Reconstruction Process

Once the models are built, we can export them from Metashape Pro as OBJ files (file > export > export model > save as Wavefront OBJ) and begin the process of digital reconstruction. 

We have tried out three different programs to use as a ‘digital sandbox’ and each one has upsides and downsides. None of them are specifically designed for this particular use, and all of them have a number of uses that we have not dug into (either because they are not useful for us, or because we simply have not yet worked out all of the functions). 

MeshLab: The first program that we tried was MeshLab. This is a very clean open-source program that loads quickly and easily, and we still use it as a fast way to check our meshes when we have changed something. It is possible to load multiple meshes into the program and then arrange them; however, we personally find the process to be cumbersome.

Fig. 5: MeshLab is a good option for viewing models, but unwieldy for manipulating them (here the Manipulator Tool is in the green square and the ‘translate’ option has been selected). These are the top two blocks of our core section.

The basic steps are simple: select the Manipulator Tool (it has a tri-color x-y-z axis icon), then choose T to move (translate) the object and R to rotate it (and S to scale, although since our models are all at the same scale, we do not use this feature). The actual movement of the object is fine, but once you have activated the Manipulator you cannot change your viewpoint without switching to the non-editing setting, which makes small adjustments of objects in relation to each other awkward; nor can you switch easily between translate and rotate. Also, if you do not hit return once you have made your adjustments, they will not be saved and you will have to make them again. One last, very annoying, issue is that MeshLab does not have an undo option for all actions, so if you have made a mistake in the positioning, you often just have to start all over.

Adobe Dimension: The newest option, Dimension, is part of the Adobe CC package (so definitely not open source). This program seems to be primarily for designing 3D objects (it includes a large number of preconstructed models, such as coffee bags, bottles, or t-shirts), but it also provides the easiest interface for moving multiple objects within a space. This is largely because there are two easy ways to switch from ‘editing mode’ (where you can move the object) to ‘orbit/pan’ (where you can move yourself around the object). The first is by hitting the 1 key to choose ‘orbit’ and then the v key to switch back to ‘edit,’ but the best is to use the right mouse button (or a two-finger click, hold, and drag on the trackpad).

Fig. 6: Dimension is one of the newest options for 3D software, and has the easiest interface to manipulate the blocks (see the x-y-z icon in the center), but if it is not in the render preview (which has to reload with every move), the quality of the image is low. The place to adjust scale is in the green box, and the render preview button is in the green circle.

To move the objects themselves, you simply click on the object with the select arrow and a convenient x-y-z image appears (it is automatically set to rotate around the center, but this can easily be adjusted in the properties menu). With this you can quickly rotate (using the arced line) and move (using the axis line to shift in only one axis or just clicking anywhere in the object and dragging). (Scaling can also be done by using the boxes on the axis lines.) There are a variety of other useful features, most of which have shortcuts (and which are explained in jargon-less language when you hover over them with your mouse), but which are not useful for our current reconstruction process (if you have always wanted to see what your object would look like as Valencia marble or covered in green leather, though, this is your opportunity).

There are, however, a few issues (some more serious than others) with this program. The first is an easy fix, but frustrating until you figure it out: the program has no way to tell the units of an object when it is imported (you can change the scene units, but this does not affect the object). So if you are exporting out of Metashape, which ‘thinks’ in meters, Dimension will not read that and will take every meter as a centimeter, giving you a very small object. To correct for this, in the properties menu that appears when you select an object, you can just scale all dimensions to 100 (if you don’t do this, the measurements will not be correct when you export the new OBJ).
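An alternative to re-scaling inside Dimension would be to multiply every vertex coordinate in the OBJ by 100 before importing it. The sketch below shows the idea on the text lines of an OBJ file; `scale_obj_lines` is a hypothetical helper of ours, not part of Dimension or Metashape, and it deliberately leaves normals, UVs, and faces untouched.

```python
# Workaround sketch for the metre/centimetre mismatch described above:
# scale the 'v x y z' vertex lines of an OBJ by 100 before import.

def scale_obj_lines(lines, factor=100.0):
    """Return OBJ lines with every geometric vertex scaled by `factor`."""
    out = []
    for line in lines:
        if line.startswith("v "):  # geometric vertex line only
            parts = line.split()
            coords = [str(float(c) * factor) for c in parts[1:4]]
            out.append(" ".join(["v"] + coords + parts[4:]))
        else:  # normals (vn), UVs (vt), faces (f), etc. pass through unchanged
            out.append(line)
    return out
```

For example, `scale_obj_lines(["v 1 2 3"])` returns `["v 100.0 200.0 300.0"]`; applied to a whole file, the mesh imports at the intended size.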

The other issue is more an irritation than a long-term problem (and there may be a way of fixing it that we haven’t figured out yet): the resolution of the texture file in Dimension is terrible unless the render preview has been activated. Unfortunately, if preview is on, every time you move the camera or object it has to go through the rendering process (which can take anywhere from a few seconds to a minute or so). To turn on the render preview, click on the half-pixelated icon in the upper right corner; if you click and hold this icon, you can also change the resolution of the render preview – the 1/4 setting loads faster but, frankly, is not that much better than the un-rendered view.

Blender: The final program that we use is the open-source software Blender. This is by far the most complicated of the options, but also the most powerful. Blender has recently come out with a new version (2.81), which has a number of improvements over the last iteration; it is still very much designed for people who are already knowledgeable about 3D software (and the extensive technical vocabulary that goes along with it), but the basic functions are now much simpler to locate and use. In the new Blender, it is easy to pan, zoom, and rotate the camera around the object (on a trackpad, it is pinch to zoom, two-finger drag to rotate, and shift plus two-finger drag to pan; there is also an x-y-z image in the upper right-hand corner that allows for easy rotation, with zoom and hand buttons under it that allow you to zoom and pan).

In order to move the objects within the space, the new Blender has a transform tool (the icon on the left side of the screen with a circle with arrows in it around a box), which functions in the same way as the one in Dimension (arrows to move along axis lines, arcs to rotate, and boxes to scale). Using the icons above this one, you can also opt to do only one of these functions at a time.  

Fig. 7: Blender is a pretty powerful program, but can be confusing if you are not familiar with the vocabulary; the new version is however much easier to navigate in than its predecessor. To move the camera (your view point), you can use the x-y-z image in the upper right hand corner, with the zoom and pan buttons below it. To move the objects you can click on the ‘transform’ icon (selected in blue on the left-hand side of the screen), then use the x-y-z arrows that appear to move, rotate, and scale. Highlighted in the green box is ‘display render preview’ and the menu arrow that allows you to access the ‘scene world’ button.

We would quickly like to point out some of the tricks that we have learned that make Blender more functional for us:

  1. The spacebar is your best friend – it pulls up a search window that lets you find possible actions and shows you the shortcut for each action. Make sure when you open Blender for the first time that you set the spacebar to ‘search’ (you can also make this adjustment in edit > preferences > Keymap, then switch Spacebar Action to ‘search’).
  2. To delete an object, select it, hit x and then select delete, but your cursor has to be over the object screen, not over the menus. 
  3. To see the best version of your texture on your model, choose ‘display render preview’ (the far right button in the upper right hand corner of the screen, before the menu arrow – also, in the menu that this arrow reveals, choose ‘Scene World’).
  4. Often the ‘transform’ x-y-z image is not in the center of the object you want to adjust, which, especially when trying to rotate, can be irritating. To correct this, right-click on the object to pull up the ‘object context menu,’ move your mouse over ‘set origin’ (or just hit o on your keyboard), and choose ‘origin to geometry.’
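In outline, ‘origin to geometry’ moves the object's origin to the center of its own vertices so that rotation pivots sensibly. The few lines below sketch that idea in plain Python over a vertex list; `origin_to_geometry` is our hypothetical illustration (Blender itself uses the geometry's median point by default, which for simple meshes amounts to much the same thing).

```python
# Sketch of what 'origin to geometry' does: find the centroid of the
# vertices and re-express every vertex relative to it, so the new origin
# (and hence the rotation pivot) sits in the middle of the object.

def origin_to_geometry(vertices):
    """Return (centroid, vertices re-expressed relative to the centroid)."""
    n = len(vertices)
    cx = sum(v[0] for v in vertices) / n
    cy = sum(v[1] for v in vertices) / n
    cz = sum(v[2] for v in vertices) / n
    recentred = [(x - cx, y - cy, z - cz) for x, y, z in vertices]
    return (cx, cy, cz), recentred
```

After recentring, rotating about the origin turns the block in place instead of swinging it around a distant pivot.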

The one thing that was easier in the old version of Blender was the lighting. We prefer a shadeless universal light for viewing our blocks; in the old version this just meant deleting the lamp, but the new version requires going to ‘world properties’ in the menu on the right (the little pink globe), changing the surface to ‘emission,’ and changing the color to a lighter shade of gray (we also sometimes increase the strength to lighten up the whole thing, but we still find the color to be a bit off). There may be a much easier and better way to achieve this, but we have yet to come across it. 

There are a huge number of other things that you can do in Blender (including unwrapping the UVs to bake a drawing into a model, which we will cover in a later article), but trying to figure out what everything does is not straightforward, and can be overwhelming at first. 

Overall: Each of these programs has its own strengths and weaknesses; however, the new version of Blender has really improved its functionality for this purpose, and it has the added advantage of being open source. If you are looking for one program to focus on, Blender would probably be our choice; however, if you already have Adobe CC, Dimension is also worth investigating.

Finishing up

Once the models you are working with have all been arranged to your satisfaction, you can export them from any one of these programs (in all cases, the process is essentially file > export, and you can export to any file format that is convenient for you, but the OBJ format is the most widely used).

When we had our core set of blocks arranged, we were able to say decisively that the joins were all good and proceed with recreating this section of wall in real life. 


Fig. 8: Here are the blocks being reconstructed at the Western High Gate under the supervision of Lotfi Hassan, our head conservator, and Reis Bedoaowi abd Alaa; photos by J. Kimpton.

We have also continued to add blocks that we suspect join to this group into our digital sandbox and, after assessing their fit, have been able to significantly expand our digital reconstruction. As we are able to create more and more models, we will continue this process, and hopefully be able to digitally recreate a much larger section of the gate. All of our final collections have been shared on Sketchfab, and are available for anyone to examine and use.

Fig. 9: Here are all of the blocks from this part of the wall of the WHG that we currently have models of – many are too large to be moved in real life without heavy equipment, so this is the next best way to visualize how they fit together. They are shown here in Sketchfab, which makes them viewable by anyone.

Bibliography: 

Hölscher, Uvo. The Excavation of Medinet Habu. University of Chicago Oriental Institute Publications. 5 vols. Chicago: University of Chicago Press, 1934-1954.

[1] Uvo Hölscher, The Excavation of Medinet Habu, 5 vols., University of Chicago Oriental Institute Publications (Chicago: University of Chicago Press, 1934-1954), v. 21, 41, 54-55, 66.

 


2 comment(s)

Tom van eynde

May 04, 2020

Looks like a lot of extra work because you’re trying to get around photographing the blocks the standard Chicago House way. I understand the problems when you can’t move the blocks. Seems like you’re more interested in the 3D aspect of the block than in the lighting of the decoration. I’m sure there are reasons for that. But sometimes you get caught up in the tech and lose track of the finished product. I bet Ray wishes he had Photoshop when he started the Luxor Temple project. Please forgive my old-school approach to these problems. It’s been a long time since I’ve been in the field, but I was always told about the Chicago House standards, so that’s my default. If you’re in Chicago, as I am, I would love to see what you’re doing.

Jen Kimpton

July 21, 2020

I just wanted to clarify that the blocks of the Western High Gate have also been photographed professionally (and beautifully!) on film and in digital format. The film photographs are what we use for drawing enlargements, and they will be the photographic plates in the publication. The article above is about how we’re learning to use 3D models as a tool to help us make (or confirm) joins between blocks that are too big or too fragile to move around on a whim. It’s been wonderfully useful in that way, but I don’t think any of us would argue that this is a substitute for formal photography.
