Lost but Found in the Photogrammetry World

The Quandary:

Have you ever broken or lost a small part of an important object you value? Perhaps the strap of that beautiful watch you got from your grandma, or the battery cover for the back of your remote control? You looked for it everywhere, but the part was too “insignificant” to be sold on its own. Or it just wasn’t the sort of thing anyone would expect to need to replace.

The original black “obsolete plastic object” (on left) keeping files safely stored, alongside the newly cloned red part (on right)

Last semester at Williams College, Chris Koné, Associate Professor of German and Director of the Oakley Center for the Humanities & Social Sciences, had a similar experience. He lost an integral part of his desk that allows him to keep his files neatly stored and organized (shown in the picture). Desperate to have a place for the files and papers scattered miserably on the floor, Prof. Koné looked in a brick-and-mortar NYC office parts store, as well as on Amazon, eBay, and other e-commerce websites, but alas, the object was nowhere to be found. It had become obsolete!

The “obsolete plastic object”

Determined to leave no stone unturned in finding a replacement for the obsolete plastic object, Prof. Koné did what any sensible person with access to the Makerspace would do – he asked for a 3D-printed model of the object! And it is here that he met me, an intern working at the Makerspace over the summer. In the process of helping him, I learned about multiple methods of photogrammetry and created a significantly more efficient and streamlined workflow for the Makerspace. 

Some Background

Since I was a new student worker with zero knowledge of photogrammetry and 3D printing, David Keiser-Clark, the Makerspace Program Manager, thought this project would be just the right amount of challenge for me. Photogrammetry is the process of creating a three-dimensional digital model of an object by taking dozens or hundreds of photos of it from different angles and processing them with software to create a digital spatial representation of the object. The project would be a good introduction to the 3D digital world while allowing me to get acquainted with the Makerspace.

If you have tried photogrammetry, you know that some of the most difficult objects to work with are those that are dark or shiny. This object was dark and shiny! When an object is dark, the software struggles to distinguish one feature on the object from another, resulting in an inaccurate digital representation. Likewise, when an object is shiny, it reflects light, producing images that lack detail in the reflective areas. Thus, you can imagine how challenging it is when your object is both shiny and dark!

Step 1

The first step was to figure out how to reduce the darkness and shininess of the object. To kill two birds with one stone, I covered the object with white baby powder, a cheaper alternative to the expensive photogrammetry sprays used in industry. The powder’s white color would help eliminate the object’s darkness and give it some helpful texture, while its anti-reflective nature would reduce shininess. After several attempts to completely cover the object, this method proved ineffective because the powder would not stick to the object’s smooth surface. A little out-of-the-box thinking led me to cover the object with matte blue paper tape, which proved very effective: the tape’s rough texture reflected very little light.

obsolete plastic object coated with blue tape

A Bit of Photography 

Milton taking pictures for photogrammetry

Now that the two biggest giants had been slain, it was time to move on to the next step: taking pictures of the object. Taking shots for photogrammetry is very similar to doing stop-motion animation. You take a picture of the object, rotate it by a small angle (between 5 and 15 degrees) by hand or with a turntable (a rotating disc), and take another picture. You repeat this process until the object has rotated completely, change the camera angle (e.g., by shooting from above the object), and redo the whole process. This can be quite tedious, especially if you have to do it by hand, but luckily for me, the Makerspace had recently bought an automated turntable, so I didn’t have to rotate the object manually. I also got to write the first documentation guide for the turntable, so that other Makerspace student workers can use it more easily in the future!
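To give a rough sense of scale (ballpark numbers based on the 5–15 degree increments above, not my exact counts): rotating in 10-degree steps yields 36 photos per full revolution, so covering just three camera heights already adds up to over 100 photos for a single object.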

Alignment Process

Once the photos were ready, the next step was to analyze them using photogrammetry software. I turned to Agisoft Metashape, a powerful program that ingests pictures of an object taken from different angles and analyzes them to create a 3D depiction of the object. The software first finds common points between the various images, called anchor points, and calculates their relative distances, allowing it to place them in 3D space. This process is called alignment.

Unfortunately, despite my efforts to aid the software by covering the object with matte blue tape to reduce its shininess and darkness, the obsolete plastic object did not align properly in Metashape. While I could not pinpoint the exact reason, I suspect it was due to its hollow shape, which made it challenging for the software to capture points on the inner surfaces, especially the corners. It was quite disappointing to get these results, especially after having had to wade through Metashape’s jungle of commands, but that was certainly not the end of it all. I decided to try a different approach – raise an older desktop 3D scanner from the grave!

Misalignment in Metashape

The Hewlett Packard (HP) 3D Structured Light Scanner

The DAVID 3D Scanner (now sold as the HP 3D Structured Light Scanner) works by projecting patterns of light onto a subject and capturing with a camera how those patterns deform across the surface; from that deformation, the software triangulates the distance of each point. These points, represented as XYZ coordinates, are collectively used to digitally reconstruct the object in 3D space. I intended to use the structured light scanner as an alternative to Metashape because it allows more control over the alignment process. For example, you can select two specific scans you want to align and tell the software how to align them. In addition, the scanner features a projector that sheds light on the object you’re scanning, as well as a calibrated background panel, allowing greater detail to be picked up.

HP 3D Structured Light Scanner

A Bit of Scanner Surgery

Using the HP 3D Structured Light Scanner

The Makerspace’s HP scanner unfortunately hadn’t been functional in over three years. The camera was not working, and the scanner’s software could not export files due to licensing issues. I updated the device’s software and installed new camera drivers, and in no time, the scanner was fully functional again. I then scanned the obsolete plastic object with it. Unfortunately, the results were unsatisfactory: the scan resolved the alignment issue I had hit in Metashape, but the digital model had thin walls and holes in some of its surfaces, making it impossible to print.

Thin walls and holes in the structured light scanner model

Building from the Ground Up with Fusion 360

Results of different lighting settings in the HP 3D Structured Light Scanner

After trying out different strategies with the HP 3D Structured Light Scanner, such as different light settings, and still not getting good results, David suggested a different method – building the model from scratch! Excited to try out new software (and get a break from the structured light scanner!), I began exploring Fusion 360 tutorials and documentation. Autodesk Fusion 360 is Computer-Aided Design (CAD) software with applications across various sectors, including manufacturing, engineering, and electronics. It lets you create a simple sketch of a model and build it into a solid model with precise dimensions. You can even add simulations of real-world features such as materials and lighting.

Of course, this new, complicated piece of software came with its challenges. For example, I had to know the dimensions of the fillets (the rounded corners) inside and outside my object. A little creativity combined with a pair of vernier calipers and a piece of paper did the job. Another challenge was understanding the timeline, one of the most important features of Fusion 360, which records your modeling steps and lets you go back to any earlier point. Researching online and getting help from a friend (shoutout to Oscar!) with more experience in Fusion 360 proved helpful in better understanding the software.

Successful Fusion 360 model of the obsolete plastic object

Fusion 360 timeline for modeling the obsolete plastic object

The Obsolete Plastic Object Was No Longer Obsolete

Finally, after several days of learning Fusion 360 and incrementally building a model, the obsolete plastic object was no longer obsolete. I produced an accurate model of the object and printed several copies, which Professor Koné was more than happy to receive. His files had regained their home, and time spent scouring eBay and Amazon for a nameless object had come to an end!

The red part (on right) is the new clone of the original black “obsolete plastic object” (on left). Files are once again safely organized.

Conclusion

My experience working on photogrammetry and 3D modeling at the Makerspace was certainly full of twists and turns, but definitely worth it. I learned how to use several complicated software applications, significantly improved the Makerspace photogrammetry procedure (reducing a roughly three-month process to one or two days), and learned to approach new challenges with an open mind.

Prof. Koné and me holding the original (covered in blue tape) and a newly 3D-printed black “obsolete plastic object”

Next Steps

I look forward to exploring other methods of photogrammetry, particularly ones that require less equipment, such as those that use only a smartphone. RealityScan is one promising alternative that can create lower-resolution scans and models in less than 15 minutes. With new technologies coming out every day, there are many avenues to explore, and I’m excited to discover better methods.

Screenshot: Experimenting with the RealityScan smartphone app

Truly-Local Internet: The PiBrary Project

Figure 1: A Raspberry Pi 4 Model B

If a local organization has important information for its neighbors, is there a way it can broadcast directly to them without bouncing the data to Toronto and back? I grew up here in the Berkshires and recently joined the Office of Information Technology (OIT) at Williams. With Williams College’s commitment to community service and support in mind, my project goal was to demo a low-cost, low-maintenance device that a local organization could use to easily broadcast information and resources over WiFi directly to nearby cell phone users through familiar, standard methods — (1) connecting to a WiFi network, and (2) navigating to a website address in a browser — without needing national or global infrastructure, specialized equipment, or technical skills on either side of the connection. Such an “internet-in-a-box” model could have useful applications in emergency scenarios, but it could also provide curated information or resources digitally to multiple people in other specific, time-limited places and moments — for example, at a festival, workshop, teach-in, or other community event.

Figure 2: Wilmington VT, a mere 40-minute drive from Williams.

Let’s give this idea some real context. Imagine a small town in nearby southern Vermont – say, Wilmington. It’s late August, and a storm rips through, dropping 8 inches of rain in 24 hours, washing out roads and bridges, and knocking cell towers and internet infrastructure offline, leaving you without any connectivity for days. Local fire, rescue, and police services, town government, even your electric company, typically use websites, social media, and text messages to communicate critical information — but now, those methods don’t work. Where can you go for information regarding emergency food, shelter, medical care? Is the water safe to drink? When will power be restored? Where are the downed power lines and flooded roads? You’re both literally and figuratively in the dark.

No need to imagine: this actually happened in 2011, with Tropical Storm Irene. Superstorm Sandy in 2012 presented a similar case. And just this April, a single fiber optic cable damaged by a late-season snowstorm shut down Berkshire businesses for a day.

Truly local connections literally do not exist on the modern Internet. Are you on campus and want to view the williams.edu website? That data lives on a server in Toronto and travels through six or seven intermediary servers (including places like New Jersey and Ohio) before it lands in your cell phone’s browser (producing roughly 0.5 g of CO2 per visit along the way). Under normal conditions, this globalized infrastructure is reliable and has important benefits. But it’s useful to think about the edge cases. Climate change is bringing more unpredictable severe weather events. And rural areas like ours are often underserved by internet service providers (ISPs), which have little financial incentive to invest in maintaining or expanding infrastructure.
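If you’re curious about the path your own connection takes, you can trace it yourself from a laptop’s command line (purely optional — the hop count and locations will vary by network, and on Windows the equivalent command is tracert):

traceroute williams.edu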

This post offers a guide to creating your own DIY hyper-local webserver. If you can write a webpage (in plain HTML) and are open to my guidance in using a command line: follow along!

Required Equipment and Steps

Figure 3: Required hardware: Raspberry Pi 4 Model B, Power Supply, PiSwitch, and 32GB MicroSD card with Raspberry Pi OS installed.

I decided to build using a Raspberry Pi 4 Model B single-board computer. The Pi is about the size and weight of a deck of cards, and runs a version of Linux, an open-source operating system (OS).

There were two tweaks I determined were necessary to make the Pi ready to play the role I imagined. First: I needed to enable the Pi to act as a webserver, rather than a desktop. Second: I needed to adjust the Pi’s built-in WiFi connection to broadcast, rather than receive, a WiFi signal.

Tweak 1: Webserver Setup

Globally, about 30% of all known websites use Apache, open-source webserver software first released in 1995. I installed Apache on the Pi through the command line, using the command:

sudo apt install apache2
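On Raspberry Pi OS (as on Debian generally), Apache starts running as soon as the package is installed. If you want to double-check that the webserver is up before going further — optional, and just one way to verify — you can run:

systemctl status apache2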

Now, any content I wanted to broadcast to other users I could simply place into the preexisting folder at /var/www/html/. I wrote a home page, an “about” page, and created 4 subfolders loaded with some open-licensed content (PDFs, audio and video files). You can check out my content (and adapt it if you like!) at github.com/gpetruzella/pibrary.
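For reference, here is roughly what a bare-bones home page could look like, written straight into the Apache web root from the command line. This is only a sketch — the title, text, and linked filename are placeholders, not the actual PiBrary content (that lives in the GitHub repo above):

# write a placeholder home page into Apache's web root (needs sudo)
sudo tee /var/www/html/index.html > /dev/null <<'EOF'
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>PiBrary</title>
  </head>
  <body>
    <h1>Welcome to the PiBrary</h1>
    <p>Local information and open-licensed resources, served over WiFi.</p>
    <p><a href="about.html">About this site</a></p>
  </body>
</html>
EOF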

Tweak 2: Adjusting WiFi to Broadcast

I then used the following on the command line to tell the Pi to broadcast as a WiFi hotspot:

sudo nmcli device wifi hotspot ssid <my-chosen-hotspot-name> ifname wlan0

(The Pi’s built-in WiFi device is named “wlan0”; I chose to name my hotspot “pibrary”. Behind the scenes, NetworkManager sets this hotspot connection to “shared” mode, which means the Pi itself hands out IP addresses to any phone that joins — no separate router needed.)

Now, the Pi was broadcasting a WiFi hotspot, which other devices would be able to see and connect to. But… I wanted to make sure this happened automatically every time the Pi was switched on. To accomplish that, I needed to find the new hotspot’s UUID, then use that in one final configuration step. I found the hotspot’s UUID by running:

nmcli connection

This displayed a table with multiple rows: I found the “pibrary” row and copied its UUID. Then, I ran:

sudo nmcli connection modify <pibrary’s UUID> connection.autoconnect yes connection.autoconnect-priority 100

With this modification completed, simply switching on the Pi will automatically start broadcasting a WiFi signal (as a “hotspot” or source), with no extra steps.
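To confirm that the hotspot really does come back on its own, you can reboot the Pi and then list the active connections (just one way to check; the exact output format depends on your NetworkManager version) — the “pibrary” row should appear with wlan0 as its device:

nmcli -f NAME,UUID,DEVICE connection show --active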

Connecting from a Nearby Mobile Phone

Figure 4: Viewing the homepage at pibrary.local.

Now, the Pi was both “serving” webpages, and broadcasting a WiFi hotspot. Even without Internet — such as during power outages — any nearby user could find and connect to the WiFi hotspot on their phone… but what “web address” would they type in the browser to reach the content? The final piece of the puzzle requires knowing the Pi’s “hostname”. When I first set up my Pi, I gave it the hostname pibrary (just like the hotspot). The domain name

.local

is a special-use domain name reserved for local network connections. So, once a cell phone has connected to the “pibrary” WiFi hotspot, that user can type

pibrary.local

into the browser to reach the homepage I had set up in Tweak 1. Finding your own Pi’s hostname is as easy as entering the following on the command line:

hostname
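Two quick asides. First, the reason a name like pibrary.local works at all is that Raspberry Pi OS ships with an mDNS service (Avahi) that advertises the Pi’s hostname to other devices on the same network, so no DNS server is needed. Second, if you didn’t set the hostname when you imaged the SD card, you can change it later; the sketch below assumes a recent Raspberry Pi OS image whose default hostname is still “raspberrypi” (you could also use the Hostname option in sudo raspi-config):

# set the new hostname
sudo hostnamectl set-hostname pibrary
# update the loopback entry in /etc/hosts so sudo doesn't warn about the old name
sudo sed -i 's/raspberrypi/pibrary/' /etc/hosts
# reboot so the hotspot and mDNS advertise the new name
sudo reboot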

Experiencing the Local Website

Below are a few screenshot examples of navigating the PiBrary resources from an Android phone.

Figure 5: You can stream an open-licensed video.

Figure 6: You can access directories of open-licensed learning resources.

Figure 7: You can view PDFs.

Challenges and Future Expansions

One limitation of this implementation is the range of consumer-grade WiFi: the maximum signal distance is roughly 90 meters under ideal conditions. The HaLow (802.11ah) standard offers up to 1 km of range, but today’s consumer cell phones aren’t built to use that standard. One solution could use HaLow to send data from one Pi to another – say, one in town hall and another at a fire station (if each has an inexpensive HaLow module installed), with each one serving its own nearby neighborhood over standard WiFi. Alternatively, even off-the-shelf home mesh or WiFi “extender” hardware could improve the reach of this model without significant cost.

A second challenge: maintaining and editing content. My ideal use-case was for non-technical community members (e.g. in public safety or town government) to easily push information, announcements, etc. However, this demo succeeded because I knew how to edit and manage webpage content directly (i.e. by writing HTML). For a non-technical community member, using the bare Apache webserver this way could be a significant barrier to easy deployment or quick posting, especially in the environment of a public emergency. To address this, I would like to explore whether the YunoHost open-source server management application is compatible with the PiBrary project. YunoHost offers a very familiar and robust web editing interface, plus other possible additional services, such as email hosting.

In terms of sustainability, a lightweight Pi-hosted local site radically reduces the total carbon impact of each site visit, even taking into account the fact that Williams’ website is “hosted green”. A fascinating expansion of this project would be to make sustainable web design principles and standards part of the college’s digital presence.

Finally, privacy. Unlike ordinary internet browsing, which has many elements protecting and encrypting the flow of data, this demo creates a simple, direct, unencrypted WiFi network. (You may have noticed the “insecure alert” icon next to the address in some of the screenshots above.) In the absence of any technical trust guarantees, this setup is suitable only for very specific cases where the connection between server and client is based on human trust – like in a local community!

Thanks to David Keiser-Clark, Makerspace Program Manager, and to my colleagues on the Academic Technology Services team, for their support in developing this prototype.

Simulating Spaces with AR

Fig. 1: This is me standing in front of Chapin Hall, using my tablet to view my AR model (see below) superimposed as a “permanent object” onto the Williams campus.

At age nine, I had a bicycle accident (and yes, for those who know me, I can’t swim, but I can pretty much ride a bike, thank you!). It was not that unusual as bike falls go: I was going perhaps faster than my mom allowed me at the time, and I bumped into a really, really BIG rock. I was in great pain, and when someone nearby picked me up, I said through my tears: “I want to go home, give me my tablet.” A very Gen-Z answer, I know, and I don’t recommend that readers have such an attachment to their devices. But let’s be honest—would I have been in that situation if I had been peacefully playing The Sims instead of performing dangerous activities (such as bike riding) in real life? Is there a fine line between real and virtual? Can I immerse myself in a virtual environment where I *feel* like I drive without actually driving *insert cool vehicle*?

Fig. 2: I created this sketch of “maker space” in Procreate on my tablet.

Augmented Reality (AR) is something I have been interested in learning more about as an internet geek. Although I count stars for a living now (I am an astrophysics major), I am still very much intrigued by the world of AR. Whenever there is a cool apparatus in front of me, I take full advantage of it and try to learn as much as I can about it. That’s why one of my favorite on-campus jobs is at the Williams College Makerspace! It is the place where I get to be a part of a plethora of cool projects, teach myself some stuff, and share it with the world (i.e., as of now, the College campus and the greater Williamstown community!). Fast forward to my sophomore year of college: Professor Giuseppina Forte, Assistant Professor of Architecture and Environmental Studies, reached out to the Makerspace to create a virtual world using students’ creativity in her class “ENVI 316: Governing Cities by Design: the Built Environment as a Technology of Space”. The course uses multimedia place-based projects to explore and construct equitable built environments, so tools like augmented reality can enhance the students’ perspectives on the spaces they imagine by making them a reality.

This project would not have been possible without the help of the Makerspace Program Manager, David Keiser-Clark. He made sure Professor Forte and I stayed in close communication so that we met the deadlines both for the in-class project and for the Williams College “Big Art Show”. In short, my role was to help students enhance their architectural designs with augmented reality simulations. The process involved quite a few technical and creative challenges, leading to a lot of growth as a Makerspacian, especially since I had no background in AR before taking part in this project!

Choosing Tools and Techniques

My role in this project was to research current augmented reality software, select one application, and then teach students in the course how to use it. In consultation with Giuseppina and David, we chose Adobe Aero because it’s free, easy to use, and has lots of cool features for augmented reality. Adobe Aero helps us put digital stuff into the real world, which is perfect for the architectural designs in the “ENVI 316: Governing Cities by Design” course. I then set up a project file repository and added guides that I created, such as “Interactive Objects and Triggers in Adobe Aero” and “How to Use Adobe Aero”. This documentation is intended to help students and teaching assistants make their own AR simulations during this — and future — semesters. This way, everyone can try out AR tools and learn how to apply them in their projects, making learning both fun and interactive.

AR Simulations: My process

Fig. 3: I have successfully used augmented reality so that, viewed through a tablet, my “maker space” 3D model now appears to be positioned in front of Chapin Hall at Williams College.

Once we had all the tools set up with Adobe Aero, it was time to actually start creating the AR simulations. I learned a lot by watching YouTube tutorials and reading online blogs. These resources showed me how to add different elements to our projects, like trees in front of buildings or people walking down the street.

Here’s a breakdown of how the process looked for me:

  1. Starting the Project: I would open Adobe Aero and begin a new project by selecting the environment where the AR will be deployed. This could be an image of a street or a model of a building façade.
  2. Adding 3D Elements: Using the tools within Aero, I dragged and dropped 3D models that I previously created in Procreate into the scene. I adjusted their positions to fit naturally in front of the buildings.
  3. Animating the Scene: To bring the scene to life, I added simple animations, like people walking or leaves rustling in the wind—there was also the option to add animals like birds or cats, which was lovely. Aero’s user-friendly interface made these tasks intuitive, and videos online like this one were extremely helpful along the way!
  4. Viewing in Real-Time: One of the coolest parts was viewing the augmented reality live through my tablet. I could walk around and see how the digital additions interacted with the physical world in real-time.
  5. Refining the Details: Often, I’d notice things that needed adjustment—maybe a tree was too large, or the animations were not smooth. Going back and tweaking these details was crucial to ensure everything looked just right. Figs. 1, 2 & 3 show an example of a small project I did when I was just starting out.

Final Presentation: The Big Art Show

Figures 4 and 5 show side-by-side comparisons of real-life vs. AR spaces as presented in the Williams College “Big Art Show” in the fall 2024 semester. The student who used the AR techniques decided to place plants, trees, people, and animals around the main road to make the scene look more lively and realistic.

Fig. 4: Exhibition at the “Williams College Big Art Show” featuring 3D printed houses and buildings alongside a main road.

Fig. 5: Live recording of an AR space in Adobe Aero, enhanced with added people, trees, and birds to create a more memorable scene.

Lessons Learned

Reflecting on this project, I’ve picked up a few key lessons. First, jumping into something new like augmented reality showed me that with a bit of curiosity, even concepts that seem hard at first become fun. It also taught me the importance of just trying things out and learning as I go. This project really opened my eyes to how technology can bring classroom concepts to life—in this case, the makerspace!—making learning more engaging. Going forward, I’m taking these lessons with me.