Physically based rendering (PBR) of the Egyptian collection at the Brooklyn Museum

May 1, 2020

Written by Mohamed Abdelaziz Abdelhalim Mahmoud, Ministry of Egyptian Tourism and Antiquities, associated researcher at Indiana University Bloomington, associated researcher at CEAlex/CNRS

Introduction 

The article and the accompanying tutorial videos (originally presented at the ARCE 2020 Virtual Annual Meeting) survey a number of new methodologies and applications of photogrammetry in Egyptology and Virtual Heritage in general.

First, I present methods for the photorealistic rendering of virtual objects from the Brooklyn Museum's Egyptian collection so that they can be seamlessly composited into images of the real world.

Second, I present a methodology for the virtual anastylosis (re-erection) of five sculptural fragments belonging to Ramesses II at Tanis, Egypt. Fundamental to the method is the use of photogrammetry to create 3D models of the fragments. Through this process we can visually reassemble the fragments without intervening physically on the pieces themselves.

Third, I present a method of "virtual" Reflectance Transformation Imaging (RTI), achieved by applying virtual lighting to a 3D model rather than photographing the object in the field under varying light sources.

Fourth, considerable research has been conducted on 3D surface reconstruction from multiple images taken from a fixed viewpoint under varying lighting conditions. Here I present a new image-based modeling technique based on Polynomial Texture Mapping (PTM), which allows the automated reconstruction of highly detailed 3D texture-mapped models. Using only computer graphics and photogrammetry, the methodology is able to represent all visible inscriptions and features of small objects. Furthermore, details invisible to the naked eye may become apparent by applying specific filters and altering the virtual light in RTI after stripping all color and texture information from the model.
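
For readers who want to see the nuts and bolts: classic PTM models each pixel's brightness as a biquadratic polynomial of the light direction's horizontal components (lu, lv), fitted by least squares over a stack of registered images. The following is a minimal NumPy sketch of that standard formulation, not the project's own code; it assumes the images share one viewpoint and that the light directions are known.

```python
import numpy as np

def fit_ptm(images, light_dirs):
    """Fit per-pixel biquadratic PTM coefficients by least squares.

    images     : (N, H, W) grayscale intensities, one image per light position
    light_dirs : (N, 3) unit light directions (lu, lv, lw)
    returns    : (H, W, 6) coefficients a0..a5 of the PTM polynomial
    """
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # Design matrix of the model a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    n, h, w = images.shape
    coeffs, *_ = np.linalg.lstsq(A, images.reshape(n, -1), rcond=None)
    return coeffs.T.reshape(h, w, 6)

def relight(coeffs, lu, lv):
    """Evaluate the fitted PTM under a new, purely virtual light direction."""
    a0, a1, a2, a3, a4, a5 = np.moveaxis(coeffs, -1, 0)
    return a0 * lu**2 + a1 * lv**2 + a2 * lu * lv + a3 * lu + a4 * lv + a5
```

Once the six coefficients are stored per pixel, the object can be re-lit interactively from any direction, which is what makes the filtering and virtual relighting described above possible.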

Physically based rendering (PBR) of the Egyptian collection at the Brooklyn Museum

Left: before applying PBR maps, right: after applying PBR maps

To generate predictable and consistent results, we study physically based methods, which simulate how light propagates in a mathematical model of the augmented scene. This computationally challenging problem demands both efficient and accurate simulation of light transport in the scene and detailed modeling of the geometry, illumination conditions, and material properties of the scenes or objects to be reconstructed. In this presentation, we discuss and formulate the challenges inherent in these steps and present several methods to make the process more efficient. We illustrate this with objects from the work-in-progress 3D imaging project undertaken by the Egyptology Program at Indiana University Bloomington.
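
At the core of any PBR pipeline is a material model that turns the texture maps (albedo, roughness, metalness) into reflected light. The snippet below is a minimal NumPy illustration of the widely used Cook-Torrance metallic-roughness BRDF for a single point light; it is a generic, simplified example, not the shader actually used for the Brooklyn Museum renders.

```python
import numpy as np

def pbr_shade(albedo, roughness, metallic, n, v, l, light_rgb):
    """Cook-Torrance metallic-roughness shading for one surface point and one light.

    albedo    : (3,) linear RGB from the albedo map
    roughness : scalar from the roughness map (0 = mirror, 1 = fully rough)
    metallic  : scalar from the metalness map
    n, v, l   : unit surface normal, view direction, light direction
    light_rgb : (3,) incoming light intensity
    """
    n, v, l = (np.asarray(x, dtype=float) for x in (n, v, l))
    h = (v + l) / np.linalg.norm(v + l)          # half vector
    ndl = max(float(np.dot(n, l)), 0.0)
    ndv = max(float(np.dot(n, v)), 1e-4)
    ndh = max(float(np.dot(n, h)), 0.0)
    hdv = max(float(np.dot(h, v)), 0.0)

    a2 = roughness**4                            # GGX normal-distribution term
    d = a2 / (np.pi * (ndh**2 * (a2 - 1.0) + 1.0) ** 2)

    k = (roughness + 1.0) ** 2 / 8.0             # Smith/Schlick-GGX geometry term
    g = (ndl / (ndl * (1 - k) + k)) * (ndv / (ndv * (1 - k) + k))

    f0 = 0.04 * (1 - metallic) + np.asarray(albedo) * metallic   # Fresnel base reflectance
    f = f0 + (1.0 - f0) * (1.0 - hdv) ** 5       # Schlick approximation

    specular = d * g * f / (4.0 * ndl * ndv + 1e-7)
    diffuse = (1.0 - f) * (1.0 - metallic) * np.asarray(albedo) / np.pi
    return (diffuse + specular) * np.asarray(light_rgb) * ndl
```

Varying the roughness and metalness values per pixel is exactly what the PBR maps in the image above do, which is why the same geometry can read as matte stone or polished metal under identical lighting.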

Virtual anastylosis

Left: broken parts, right: after reassembling the broken parts

In our archaeological work we often find sculptures that require the reassembly of their fragments and the restoration of their lost parts. 

Traditional restoration work has certain drawbacks: a) it is expensive, because it involves the use of real materials; b) if not executed by a professional, the results may be harmful to the piece itself; c) it may be difficult to obtain a view of the final result of the restoration before it is completed. Virtual restoration, although it cannot replace actual restoration when a piece is in danger of deterioration, can be a very useful tool for planning interventions and for showing the hypothetical original appearance of our objects without having to intervene directly on them (respecting the principle of minimum intervention).

Thanks to virtual restoration, we can plan the reassembly of sculptural fragments in a more deliberate and scientific way, observing more easily the correspondences between the pieces that make up the sculpture. Using this method, we not only obtain a 3D model as the result of our restoration, but we can also assess the visual impact of restoring the piece with a particular material.
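
In digital terms, the reassembly is a search for the rigid transformation that brings the matching break surfaces of two fragment models into contact. The article does not specify which software performed this step; the sketch below shows one common approach, assuming the photogrammetric fragments have been exported as point clouds and using Open3D's point-to-plane ICP to refine a rough placement made by eye in a 3D viewer.

```python
import numpy as np
import open3d as o3d  # assumed dependency; any ICP implementation would do

def snap_fragment(source_path, target_path, init_guess, voxel=0.005):
    """Refine the placement of one fragment against another with point-to-plane ICP.

    source_path, target_path : point clouds (e.g. PLY) exported from the 3D models
    init_guess               : 4x4 matrix from a rough manual placement
    voxel                    : down-sampling size in model units (here metres)
    """
    src = o3d.io.read_point_cloud(source_path).voxel_down_sample(voxel)
    tgt = o3d.io.read_point_cloud(target_path).voxel_down_sample(voxel)
    for pcd in (src, tgt):
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 4, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_correspondence_distance=voxel * 3, init=init_guess,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation  # apply this to the full-resolution fragment mesh

# Example: start from an identity placement roughed out in the viewer
# T = snap_fragment("fragment_a.ply", "fragment_b.ply", np.eye(4))
```

Because the transformation is stored rather than applied physically, every reassembly hypothesis remains fully reversible, in keeping with the principle of minimum intervention.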

Virtual Reflectance Transformation Imaging (VRTI)

Simulation of how VRTI works

Virtual Reflectance Transformation Imaging (VRTI) allows the automated reconstruction of highly detailed 3D texture-mapped models using only computer graphics and photogrammetry.

This methodology is able to represent all visible inscriptions and features of small or large objects. 
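
Conceptually, VRTI replaces the physical light dome of conventional RTI with renders of the photogrammetric model under many known virtual light directions; the resulting image stack is then processed exactly like a real capture. The sketch below is a minimal illustration of that idea, assuming the model's surface normals have been baked into a per-pixel normal map and using plain Lambertian shading; the stack it produces could be handed to a PTM fitter such as the fit_ptm() sketch above.

```python
import numpy as np

def virtual_light_stack(normal_map, n_lights=48, elevation_deg=45.0):
    """Render a virtual RTI image stack from a baked normal map.

    normal_map : (H, W, 3) unit surface normals in camera space
    returns    : images (n_lights, H, W) and light_dirs (n_lights, 3)
    """
    az = np.linspace(0.0, 2 * np.pi, n_lights, endpoint=False)
    el = np.deg2rad(elevation_deg)
    light_dirs = np.stack(
        [np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.full_like(az, np.sin(el))],
        axis=1)
    # Lambertian shading with a neutral grey albedo: colour and texture are
    # deliberately stripped so that only the surface relief modulates the images.
    images = np.clip(np.einsum('hwc,nc->nhw', normal_map, light_dirs), 0.0, None)
    return images, light_dirs
```

Because the lights are virtual, their number, elevation, and evenness are no longer limited by what can be done on site, which is precisely the advantage over field RTI described in the introduction.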

The case study used in this presentation is an obelisk base dating to the reign of King Seti I. The base is located in a secret room under Pompey's Pillar in Alexandria, Egypt.

The hieroglyphic inscription is very hard to read; RTI combined with photogrammetry allows us to see inscriptions that cannot be seen with the naked eye (all rights: CEAlex/CNRS).

Unrolling cylinder shapes 

Unrolling the cylinder base 

Digital epigraphy of cylindrical shapes plays an important role in the documentation of archaeological finds: it is essential for showing the form, decoration, and inscriptions that run around a cylindrical body. In the Roman theatre in Alexandria there is part of a red granite column (length 123 cm, diameter 89 cm) bearing a practically invisible Latin inscription.

The traditional way of rendering decorations or inscriptions on a cylinder onto a 2D plane involves intensive manual labor and considerable specialist skill; the major difficulty lies in manually unrolling the decoration from a 3D surface onto a 2D plane with limited distortion. A common way to document such finds is the creation of a so-called rollout, produced either by manual drawing or from photography. Generating a digital 2.5D rollout from acquired geometry offers a more reliable framework, one that assists scholars in the task of iconographic interpretation.

In this example, we address the problem by proposing an automatic method for unrolling decorations and inscriptions on 3D cylindrical shapes into 2D and 2.5D planar space. Processing 137 images in Agisoft Metashape V1.5 and MeshLab 2018.04, combined with virtual RTI, revealed seven lines of Latin inscription that could not be seen with the naked eye.
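
Geometrically, the unrolling amounts to re-parameterising the mesh in cylindrical coordinates around the column's axis: the angle (scaled by the radius, so surface distances are preserved) becomes the horizontal coordinate, the height along the axis stays vertical, and the deviation from the ideal cylinder becomes the 2.5D relief channel. The sketch below is a minimal NumPy illustration of that mapping, not the project's own implementation; it assumes the column has already been oriented with its axis along Z.

```python
import numpy as np

def unroll_cylinder(vertices, radius=None):
    """Unroll vertices of an upright cylinder (axis = Z) into a 2.5D rollout.

    vertices : (N, 3) x, y, z coordinates of the mesh, axis aligned with +Z
    radius   : nominal column radius; estimated from the data if omitted
    returns  : (N, 3) points (arc length, height, relief depth)
    """
    x, y, z = vertices.T
    r = np.hypot(x, y)                 # distance of each vertex from the axis
    if radius is None:
        radius = np.median(r)          # robust estimate of the ideal column radius
    theta = np.arctan2(y, x)           # angle around the axis; the seam falls at +/- pi
    u = radius * theta                 # arc length, so distances on the surface are preserved
    depth = r - radius                 # deviation from the ideal cylinder = the 2.5D relief
    return np.column_stack([u, z, depth])
```

Rasterising the (u, z) points with the depth as pixel value yields the 2.5D rollout, which can then be relit with the virtual RTI step to pull faint strokes of the inscription out of the relief.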

digitalEPIGRAPHY would like to thank the Ministry of Egyptian Tourism and Antiquities, Steve Vinson and Bernard Frischer at Indiana University Bloomington, and the Centre d'Études Alexandrines for allowing us to publish this article. For further examples of the author's work, please visit his personal and Indiana University's Sketchfab pages.

Go back to our previous entry to see the animated coffin models the author created for the Egyptian Coffins Project at the Harvard Museum of the Ancient Near East. 
