Pixels or Petals? Comparing Physical vs. Digital Learning Experiences

Fig. 1: Isabelle Jiménez and Harper Treschuk outside the Williams College Makerspace located in Sawyer 248

Learning has not been the same since COVID. Like the vast majority of students around the world, I had my classes interrupted by the pandemic in 2020. After canceling classes for two weeks, and in an effort to get back on track, my high school went remote and used Google Meet as an alternative to in-person learning. Remote learning did not feel the same: PDF files instead of books, video conferencing instead of in-person group projects, and notes taken on my computer with only digital material to study for exams. I cannot say that I was not learning, but something rewired my brain, and I have not been able to go back. Due to COVID and other factors, simulations may increasingly supplant hands-on learning in schools, and more research needs to be done not only on the implications for content knowledge but also for students’ development of observational skills.

Fig. 2: Sketchfab provides a digital view of the 3D model of a lily, accessible via an iPad interface. This interface allows the children at Pine Cobble School to engage with and explore the object in a virtual environment.

Last week, Williams College students Isabelle Jiménez ‘26 and Harper Treschuk ‘26 visited the Makerspace to start a project for their psychology class, “PSYC 338: Inquiry, Inventions, and Ideas,” taught by Professor Susan L. Engel, Senior Lecturer in Psychology & Senior Faculty Fellow at the Rice Center for Teaching. The class includes an empirical project that challenges students to apply concepts about children’s curiosity and ideas to a developmental psychology study. Isabelle and Harper decided to analyze the ideas young children form while observing plants, specifically flowers. They plan to compare how two groups of similarly aged children interact with flowers: the first group will interact with real flowers that they can touch and play with (Fig. 1), and the second group will interact with 3D models of the plants on iPads that let them rotate and zoom in on the flowers (Fig. 2). By analyzing children’s interactions with real and simulated flowers, they hope to extend existing research on hands-on and virtual learning to a younger age range. Valeria Lopez ‘26 was the lead Makerspace student worker who assisted them in creating the necessary models, a process covered in this blog post.

I was excited to learn about Isabelle and Harper’s project and quickly became involved by helping them use Polycam 3D, a mobile photogrammetry app that enabled us to quickly create three-dimensional digital models of physical flowers. We chose photogrammetry for its versatility: it can model almost anything given enough patience and processing power. Photogrammetry involves capturing a series of photos of an object from various angles, which software then processes into a coherent three-dimensional digital model. To meet the project’s tight deadline, we experimented with smartphone apps like RealityScan and Polycam, which offer a user-friendly approach to 3D object creation. Our standard photogrammetry workflow in the Makerspace provides greater precision, but it requires more time and training because it uses equipment such as a DSLR camera, an automated infrared turntable, a lightbox, and Metashape software for post-processing. Despite initial setbacks with RealityScan, we successfully transitioned to Polycam and efficiently generated 3D models. Since the models serve as educational resources for children and precise accuracy wasn’t necessary for this project, a mobile app proved sufficient. This rapid approach ensures that the 3D models will be ready in time for the educational teach-in Isabelle and Harper are organizing at Pine Cobble School.

Process

Fig. 3: This scene features a daffodil placed atop a turntable, all enclosed within a well-lit box to enhance visibility and detail.

We began our project by utilizing the photography equipment at the Makerspace in Sawyer Library to capture images of flowers in vases. Initially, we were careful to avoid using the provided clear glass vases because translucent and shiny objects are more difficult for the software to render correctly into accurate models. With the guidance of David Keiser-Clark, our Makerspace Program Manager, we selected a vase that provided a stark contrast to both the background and the flowers, ensuring the software could differentiate between them (Fig. 3 & 4).

Fig. 4: In the foreground, a phone is mounted on a tripod, positioned to capture the flower’s movement.

Setup

Our setup involved placing the flowers on a turntable inside a lightbox and securing the smartphone, which we used for photography, on a tripod. 

Troubleshooting

Fig. 5: Isabelle and Valeria (Makerspace student worker who participated in this project) analyze the 3D models in Polycam.

Our initial approach involved seeking out a well-lit area with natural lighting and placing the plant on a table of a contrasting color. However, we soon realized that the traditional method of keeping the phone stationary while rotating the subject wasn’t optimal for software designed for smartphones. While that approach is common in traditional photogrammetry, our mobile app performed better with camera movement. Recognizing this, we adjusted our strategy to circle the subject in a full 360-degree motion, capturing extensive coverage. This resulted in 150 pictures taken for each flower, totaling 450 pictures. Despite initial setbacks with two different photogrammetry apps, our second attempt with Polycam proved successful, allowing for more efficient and accurate processing of the models (see Fig. 5).
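For anyone curious about the arithmetic behind that capture plan, here is a minimal sketch. The three-orbit split is my own illustrative assumption; the post only reports the totals.

```python
# Rough capture-plan arithmetic for the flower scans.
photos_per_flower = 150
orbits = 3                          # assumption: low, mid, and high camera passes
shots_per_orbit = photos_per_flower // orbits
step_deg = 360 / shots_per_orbit    # angle walked between consecutive shots

print(f"{shots_per_orbit} shots per orbit, one every {step_deg:.1f} degrees")
# -> 50 shots per orbit, one every 7.2 degrees
```

In practice we simply walked slowly around the vase while the app fired continuously, but the spacing works out to roughly one photo every few degrees.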

Results

Fig. 6: An alstroemeria flower model, one of the final models uploaded to Sketchfab. Users will be able to interact with the object by rotating it a full 360 degrees.

We did not expect to need to do so much troubleshooting! In all, we spent 45 minutes loading and testing three different apps before settling on one that worked successfully. We are extremely happy with the end results. As a final step, I uploaded our three models to Sketchfab so that the children could easily access them across different devices (Fig. 6).

Next Steps

  1. Engage with Isabelle and Harper to gather their general impressions of the kindergartners’ and first graders’ interactions with the real flowers and the digital 3D models, while maintaining complete confidentiality of the results.
  2. Take the opportunity to delve deeper into mobile photogrammetry tools and document the process thoroughly. Share this documentation with other makerspace student workers and the wider community to facilitate learning and exploration in this area. 
  3. Collaborate with other departments on similar projects that utilize 3D objects to enhance educational experiences, fostering interdisciplinary partnerships and knowledge exchange.

Makerspace Collaborating on Sustainability Projects

Last spring semester, the Makerspace @ Williams College pivoted to focus on academic projects that support teaching and learning goals; previously, this focus had been an aspirational goal. The Makerspace Program Manager, David Keiser-Clark, and his team of amazing student workers, now support a dozen interdisciplinary academic and campus projects at a time. A quarter of these projects support sustainability, or specifically the Zero Waste Action Plan, including: (1) a three-college collaboration to create an eco-friendly deterrent for Japanese Beetles in our community garden; (2) a prototype to upcycle plastic bottles into 3D printer filament; and (3) a set of laser engraved wood signs, sustainably harvested from Hopkins Forest, for a Stockbridge-Munsee led garden video and audio tour at the Mission House in Stockbridge, MA. Below, you’ll find a brief spotlight on each project, and possible ways we might build on these initial efforts.

E4 Bug Off Team Project: Mitigating Japanese Beetle Damage

E4 Bug Off Team Project, installed in the Williams College Community Garden

The E4 Bug Off Team is a collaborative environmental project between engineering students from Harvey Mudd and Pomona Colleges, and students working with the Williams College Makerspace and Zilkha Center. The engineering students researched and developed a prototype that would safely repel Japanese beetles to hopefully stop them from defoliating raspberry bushes in the Williams College Community Garden. The Makerspace used 3D printers to create the parts and subsequently assembled the model. Zilkha Center interns then deployed the model in the gardens. The device is designed to be low-maintenance and only needs the reservoir filled weekly with 100% peppermint essential oil. Japanese beetles, in addition to other bugs and mammals, dislike the smell of the mint family, and the concentrated peppermint essential oil diffuses into the air via permeable wicks that extend from the reservoir tank.

One of five engineering diagrams from the 30-page E4 Bug Off Team Project.

The initial model was installed in the garden in July 2022, at the tail end of the raspberry season, and immediately leaked. This spring (2023), the Makerspace re-printed the reservoir tank with a higher density (50% solid as compared to 15%), tested the model and, after 24 hours, found it to be 100% water-tight. This second model was introduced into the garden with mixed results: the functional model performs as intended, but the impact is difficult to measure without a control plot or method of measuring beetle activity this year. 

In addition to recording measurements of a control plot, additional steps to increase effectiveness could include fabricating additional models to better saturate the air within the berry patch or returning the project to the engineering team for design modifications. The final version would be printed with ASA filament, which is physically stronger and UV/moisture resistant, as compared to PLA or ABS filaments.

To learn more about this project, read this blog post by Makerspace student worker Leah Williams.

Contributors: Harvey Mudd College (Students: Javier Perez, Linna Cubbage, Eli Schwarz, Stephanie Huang; Professors Steven Santana and TJ Tsai), Pomona College (Student: Betsy Ding), Zilkha Center (Students: Martha Carlson, Evan Chester, Sabrina Antrosio; Staff: Tanja Srebotnjak, Mike Evans, Christine Seibert) and Makerspace (Student: Leah Williams; Staff: David Keiser-Clark)

Polyformer: Sustainable 3D Printing at Williams College

While completing a month-long Zero Waste Internship at the Zilkha Center (through the ’68 Career Center’s career exploration Winter Study course), Camily Hidalgo pitched building a machine to convert waste plastic into usable 3D printer filament. The project aligns with the Williams College Zero Waste Action Plan, which is based on the sustainability strategy in the Williams College Strategic Plan. She envisioned this as being a collaborative effort between the Williams College Zilkha Center and the Makerspace. 

After researching several options, she selected the Polyformer because it is an open-source (publicly accessible) project that seeks to create a DIY kit, composed of standard and commonly found parts, able to convert and upcycle plastic bottles (waste) into usable 3D printer filament. The Polyformer project launched in May 2022 and has quickly amassed more than 4,000 followers and contributors on Discord, while a core group of dedicated volunteers develops the design.

Many of the 78 printed parts that will be assembled into the Polyformer.

The intended outcome is to build a machine, based on standardized specifications, that slices a water bottle into a half-inch-wide ribbon and then feeds that ribbon through a heated funnel, called a hot end, to extrude it as 1.75mm PET filament. Camily seeks to create a working prototype to demonstrate our ability to disrupt our plastic waste stream and upcycle it into usable 3D printer filament. Approximately 40 bottles are required to create a standard 1 kg roll of filament (enough to print 6 of the aforementioned beetle devices!). This project seeks to raise awareness that we can reduce the quantity of waste the college ships offsite while using that waste to create new filament, thereby purchasing less virgin material from China. Upcycling waste can reduce the environmental impacts associated with extracting raw materials and manufacturing products, as well as the significant carbon footprint of shipping those products to us from the other side of the globe.
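As a rough sanity check on that ~40-bottles-per-roll figure, volume conservation gives an estimate. The bottle-wall thickness and density values below are typical published values, not measurements from this project.

```python
import math

# Back-of-envelope filament arithmetic (all dimensions are assumptions):
ribbon_w = 12.7      # mm, the half-inch ribbon width mentioned in the post
wall = 0.25          # mm, a typical PET bottle wall thickness
fil_d = 1.75         # mm, target filament diameter
pet_density = 1.38   # g/cm^3, typical for PET

fil_area = math.pi * (fil_d / 2) ** 2   # filament cross-section, mm^2
ribbon_area = ribbon_w * wall           # ribbon cross-section, mm^2

# length of a full 1 kg spool of 1.75 mm PET filament, in metres
spool_len_m = (1000 / pet_density) * 1000 / fil_area / 1000

# filament metres produced per metre of bottle ribbon (volume conservation)
fil_per_ribbon = ribbon_area / fil_area

print(f"1 kg spool ~ {spool_len_m:.0f} m of filament")
print(f"1 m of ribbon ~ {fil_per_ribbon:.2f} m of filament")
```

With these numbers a 1 kg spool works out to roughly 300 m of filament and each metre of ribbon yields about 1.3 m of filament, which lands in the same ballpark as the ~40-bottle figure once you account for a few metres of ribbon per bottle.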

Polyformer diagram for building the "Right Arm Drive Unit Subassembly."

Camily Hidalgo notes that this project is complicated because the design is constantly being improved. Additionally, it requires 3D printing 78 individual parts and then assembling those with a kit of sourced materials that includes a circuit board, LCD screen, a volcano heater block and 0.4 mm hot end, a stepper motor, stainless steel tubing, bearings, neodymium magnets, lots of wires, and lots of metal fasteners.

This project began last spring semester and, as of this summer, all 78 parts have been locally printed. Assembly has begun and will be completed during the fall semester, followed by testing under a science lab exhaust hood to safely capture antimony, a toxic substance that can be released when PET reaches its melting point.

To learn more about this project, read this blog post by Makerspace student worker Camily Hidalgo.

Contributors: Zilkha Center (Student: Camily Hidalgo; Staff: Tanja Srebotnjak, Mike Evans, Christine Seibert), Makerspace (Students: Camily Hidalgo, Milton Vento; Staff: David Keiser-Clark), Chemistry (Professors: Chris and Sarah Goh; Staff: Gisela Demant, Jay Racela)

Laser Engraving: Stockbridge-Munsee Garden Video and Audio Tour

Yoheidy Feliz connecting a red maple slab to a slanted locust base, with dowels and wood glue.

The Stockbridge-Munsee Community Historic Preservation Office summer intern, Yoheidy Feliz, reached out to the Zilkha Center for help with creating locally sourced wooden signs for a permanent video and audio tour at the Stockbridge-Munsee Garden in Stockbridge, MA. She received a dozen sugar maple and red maple discs, plus locust wedges, all sustainably harvested from already fallen trees in the Williams College Hopkins Forest. 

Yoheidy approached the Makerspace and, in collaboration with expertise and tools from the Science Shop, learned how to use an industrial laser engraving machine to etch a welcome sign with QR code, as well as multiple audio guide messages, onto sanded wooden discs. She attached these discs to sloped wooden bases (“wedges”) using woodworking dowel joinery, wood glue and a mallet, and then applied a natural, non-toxic preservative coating of Walrus-brand tung oil. 

Yoheidy sits with her series of laser engraved wood slabs. She later added a laser engraved metal QR code label that directs users to the hosted video tour.

The day after completing this work, she installed the signs at the Mission House garden and then created stunning video and audio tours to guide local and remote viewers through the gardens.

To learn more about this project, please be on the lookout for an upcoming Makerspace guest blog post by Yoheidy Feliz.
Contributors: Stockbridge-Munsee Community Historic Preservation Office (Staff: Bonney Hartley, Historic Preservation Manager; Student: Yoheidy Feliz), Science Shop (Staff: Jason Mativi, Michael Taylor), CES & Zilkha Center (Staff: Drew Jones, Christine Seibert), Makerspace (Staff: David Keiser-Clark)

Cloning the Last of its Kind

Milton Vento ‘26 using photogrammetry to create a 3D object

Most recently, Associate Professor of German Chris Koné approached the Makerspace with a problem: all but one of the file hanging clips on his beloved office desk had broken. The result: piles of overflowing manila folders surrounding his desk, cramping his office and his style. He searched eBay, Etsy, and Amazon, but was unable to find replacement parts. He even visited a store in NYC that specializes in office parts. Alas, the parts were obsolete. So he asked the Makerspace if we might be able to replicate his last remaining viable part.

Milton Vento and Chris Koné hold the original and cloned objects.

Milton Vento, the Makerspace’s summer student worker, took on the task as his first project, using it as an opportunity to learn photogrammetry, an accessible and low-cost method of taking many photographs of an object from varying angles and then using software to stitch them together into a 3D digital object. He expanded the project by testing four different methods of creating 3D objects: standard manual DSLR photogrammetry with Metashape software; photogrammetry using a smart turntable that rotates and sends an infrared signal to the DSLR camera, triggering the shutter, advancing the turntable several degrees, and repeating the process; an older DAVID5 object scanner; and the RealityScan app, which requires only a smartphone. This exploration resulted in two distinctly more efficient workflows that will become standard use this fall in the Makerspace.

He also successfully re-created a 3D object of the final remaining desk part, and printed and delivered a half dozen of these parts to Chris. Should any of these ever break, the file can easily be retrieved and re-printed. 
Contributors: German Department (Professor: Chris Koné), Makerspace (Staff: David Keiser-Clark, Student: Milton Vento)

Future Project Ideas

One likely upcoming collaboration between the Makerspace and the Zilkha Center would be to laser etch additional sustainably harvested Hopkins Forest wood slices to create signs for the Williams College Community Garden. Additionally, the Zilkha Center, Makerspace, and MCLA Physics and Environmental Center may brainstorm the possibility of creating a larger prototype for upcycling plastic into pellets. The pellets could then be used for injection molding, given to local artists for artwork, or sold regionally; this idea was sparked by Smith College’s collaboration with Precious Plastics.


You can find this blog post and other sustainability projects at sustainability.williams.edu.

From Teeth to Time: Discovering Siwalik Hills’ Past Through Archaeology

How did we get here? Where do we come from? What does our future hold? As an aspiring scientist, I have always been fascinated by these (and many more!) questions about the evolution of humanity and the cosmos, and specifically by the modern ways in which experts around the world are working toward a unifying, concrete answer about the evolution and dispersal of early humans. To my pleasant surprise, scientists at Williams College are making wonderful discoveries and progress on this topic, and I was able to contribute, even just a tiny bit, to some of their work this semester!

Some Background

Anubhav Preet Kaur pictured working at the ESR Lab at Williams College

Scientists believe that early humans dispersed throughout the world in response to changing global climates. The specific routes these early humans took remain inconclusive, but there are several hypotheses about the areas they inhabited, based on Early Pleistocene evidence of hominin occupation at sites surrounding the Indian subcontinent, such as Dmanisi, Nihewan, and Ubeidiya, to name a few. The hypothesis I will explore in this blog post relates to that evidence.

One of the supporters of this hypothesis is Anubhav Preet Kaur, an archaeologist conducting a paleoanthropological research project that seeks to determine whether the Siwalik Hills in India were a likely dispersal path for early humans. As Anubhav states: “The fossils of Homo erectus, one of the first known early human species to disperse outside of Africa, have been discovered from Early Pleistocene deposits of East Europe, West Asia, and Southeast Asia, thereby placing Indian Subcontinent in general—and the Siwalik Hills, in particular—as an important dispersal route.” The problem is that no fossil hominin remains, nor any other evidence of early hominin occupation, have ever been uncovered in the area. Thus, her project seeks to paint a clearer prehistoric picture of the region’s ecology by precisely dating faunal remains from her dig sites. She hopes to determine whether the Siwalik Hills, already famous for yielding many paleontological and archaeological finds over the past hundred-plus years, had fauna and ecological conditions during these migratory time periods that could have supported early humans. And precisely dating these faunal remains requires the skills of Dr. Anne Skinner, a renowned lecturer at Williams College.

Anne is a distinguished Williams College emerita chemistry faculty member and an expert in electron spin resonance (ESR), specializing in applying ESR techniques to study geological and archaeological materials. Anubhav is a Smithsonian Institution Predoctoral Fellow and presently a doctoral student at the Indian Institute of Science Education and Research in Mohali, India. She spent three field seasons between 2020 and 2022 doing paleontological surveys and geological excavations in the Siwalik Hills region, leading a team of undergraduate and graduate field assistants and volunteers in searching for clues that might indicate whether conditions were suitable for hominins. Ultimately, she brought a selection of her fossils to Williamstown, MA, so that Anne could begin to teach her the process of using ESR to date the objects.

What is ESR?

ESR is a technique, used here on non-hominin remains, that allows scientists to measure the amount of radiation damage a buried object—in this case, several partial sets of animal teeth—has received, providing insights into its geological and biological history. The Siwalik Hills region is particularly important for archaeologists because it is home to rich deposits of fossil remains dating from the Miocene to the Pleistocene; Anubhav’s sites, in particular, contain remains from the Pliocene and Pleistocene. These are the periods during which she theorizes a dispersal could have happened, making the remains especially relevant to study. The region is located in northern India (near the border with Pakistan) and covers an area of about 2,400 square kilometers. The fossils Anubhav and her team collected (~0.63-2.58 Myr) include the remains of Pleistocene mammals such as bovids, porcupines, deer, and elephants, and they have been used as a tool for archaeologists to learn more about the region’s past climate and ecology.
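The core arithmetic of ESR dating is simple even though the measurement is not: the accumulated radiation dose recorded in the tooth enamel (the “paleodose”) is divided by the annual dose delivered by the burial environment. A minimal sketch with invented numbers, not Anubhav’s data:

```python
# ESR age arithmetic (illustrative values only):
# age = accumulated dose ("paleodose", read from the ESR signal)
#       / dose rate from the surrounding sediment and the sample itself.

def esr_age_years(paleodose_gy: float, dose_rate_gy_per_kyr: float) -> float:
    """Return an age in years from a paleodose (Gy) and dose rate (Gy/kyr)."""
    return paleodose_gy / dose_rate_gy_per_kyr * 1000.0

# e.g. a tooth that accumulated 2000 Gy in sediment delivering 1.25 Gy/kyr
print(f"{esr_age_years(2000, 1.25):,.0f} years")   # -> 1,600,000 years
```

An answer in that range would sit comfortably inside the ~0.63-2.58 Myr window of the collected fossils; the hard part in practice is measuring both quantities precisely, which is why the technique is so rare.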

The Story Starts Here

On January 9, 2023, Anne and Anubhav visited the Williams College Makerspace and asked if we could create high-quality 3D models to serve as a permanent scientific record of four sets of Pleistocene mammalian teeth that would soon be destroyed as a consequence of ESR dating. Electron spin resonance is currently the most highly specific form of dating for objects up to 2 Mya, and it is used only on animal remains because the dating process requires crushing the material into powder for analysis with highly sensitive equipment. Hominin remains are widely considered too rare and valuable to allow destructive dating, while animal remains are relatively more common. Creating high-quality 3D objects provides researchers with a means to consult and conduct further research on a digital reconstruction of the object at a future date. In addition, the 3D objects are the basis for creating 3D prints for physical study and handling.

Furthermore, ESR is a rare and expensive technique that is only available at a limited number of sites throughout Australia, Japan, Brazil, Spain, France, and the United States. Williams College is, in fact, the only facility in all of North America with ESR equipment, and Anne is the only ESR specialist at Williams. 

My Job

This spring, I collaborated on this 3D modeling project with David Keiser-Clark, the Makerspace Program Manager. We divided the job so that each of us was in charge of producing two unique 3D models of the highest quality. We began the project by holding a kickoff meeting with Anubhav and Anne to discuss project needs and to receive four sets of prehistoric teeth. Throughout the project, we held additional meetings to discuss progress and, finally, to present finished 3D digital and printed models. Despite the fact that this was my first photogrammetry assignment, I embraced the challenge head-on, working autonomously and engaging with stakeholders whenever necessary.

To build the 3D models, I used a photographic method known as photogrammetry. This required putting together many orbits of images using software to create a three-dimensional object. I participated in two workshops offered by Beth Fischer, Assistant Curator of Digital Learning and Research at the Williams College Museum of Art, to develop knowledge of this procedure. Her thorough understanding of the intricate workings of our photogrammetry software, Agisoft Metashape, was incredibly helpful. Beth was a great resource and was willing to meet with us numerous times. Moreover, I shared what I learned with David (and the entire Makerspace team) so that we could update the Makerspace’s new documentation on photogrammetry. By sharing my experiences, I helped to guarantee that the documentation addressed a wide range of challenging edge-case scenarios and would serve as a thorough and useful reference for future student workers.

Here is a walkthrough of the photogrammetry process:

Taking the Pictures

Valeria and David took an average of 341 pictures for each of the four sets of teeth (a total of 1,365 photographs).

I collaborated with David to take clear images from every angle and dimension. We took a hands-on approach, testing different angles and lighting settings to find the best way to photograph each tooth. I first relied on natural lighting and a plain background. After a couple of runs, however, David pushed the concept to the next level by adding a photography lightbox, which allowed us to shoot higher-quality photographs with bright lighting and without shadows. These photos served as the foundation for subsequent work with the photogrammetry software.

Meeting with Anubhav

Valeria interviewed Anubhav Preet Kaur before starting the 3D model process.

I wanted to know more about the scope of the project and what function my contribution would serve. To better understand the scientific process, I interviewed Anubhav, whose insight shed light on the significance of her research within the larger scientific field. This conversation helped me understand the purpose of the 3D models I was making, especially given the impending pulverization of the teeth via the ESR process. Furthermore, it emphasized the critical need for an accurate digital 3D model, as well as a physical model, that would endure beyond the destruction of the original objects.

Using Photoshop to Create Masks: What is a Mask?

Valeria encountered several challenges when importing masks. However, Beth supported her in her journey, and they overcame those obstacles together.

Masks play a crucial role in the model-building process in Agisoft Metashape because they provide precise control over which portions of an image are used to generate the model. This control ensures the resulting reconstruction is accurate and detailed by eliminating irrelevant or problematic features. I used Adobe Photoshop to create masks for each set of teeth, and this proved to be one of the most challenging aspects of the entire project. Because the sets of photos had varying angles and lighting conditions, I collaborated with Beth Fischer to troubleshoot and overcome these obstacles. This collaborative effort deepened both David’s and my understanding of the process, enabling him to document the issues I faced and their corresponding solutions for future students. After approximately one month of persistent trial and error and several meetings with Beth, we successfully identified effective solutions to the problems we encountered.
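Conceptually, a mask is just a binary image: 1 where pixels should be used, 0 where they should be ignored. Our real masks were painted by hand in Photoshop, but the idea can be sketched with a toy threshold filter (the pixel values below are invented):

```python
# A mask marks which pixels the software may use (1) and which to ignore (0).
# Here we fake one by thresholding a tiny grayscale "photo" so that bright
# background pixels are masked out and only the dark "tooth" pixels remain.

def make_mask(gray, background_threshold=200):
    """gray: 2D list of 0-255 brightness values; returns a 2D list of 0/1."""
    return [[0 if px >= background_threshold else 1 for px in row]
            for row in gray]

photo = [[255, 250,  90],
         [248,  60,  70],
         [251, 245, 252]]
mask = make_mask(photo)
# -> [[0, 0, 1], [0, 1, 1], [0, 0, 0]]  : only the dark pixels are kept
```

Simple thresholding like this fails exactly where our photos were hard: uneven lighting and shadows, which is why the hand-painted Photoshop masks (and Beth’s help) were needed.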

Using Metashape to Create the 3D Model

Using Agisoft Metashape to construct the 3D Model by importing the photographs and generated masks.

When you use Metashape, it starts by scanning each image and looking for specific points that stand out, like a small group of dark pixels in a larger area of light pixels. These distinctive points are called “key points,” and the software only searches for them in the unmasked areas of the image. Once it finds these key points, Metashape starts to match them across multiple images. If it succeeds in finding matches, these points become “tie points.” If enough tie points are found between two images, the software links those images together; collectively, the tie points are called a “sparse point cloud.” These tie points anchor each image’s spatial orientation to the other images in the dataset—it’s a bit like using trigonometry to connect the images via known points. Since Metashape knows the relative positions of multiple tie points in a given image, it can calculate the image’s precise placement relative to the rest of the object. After that process, I made the model even more accurate by using “gradual selection” to refine the accuracy of the sparse point cloud, and then I “optimized cameras” to remove any uncertain points (yay!).
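That trigonometry analogy can be made concrete with a toy two-dimensional version of what tie points enable: if the same feature is sighted from two known camera positions, its location falls at the intersection of the two viewing rays. Metashape does this at far larger scale, in three dimensions, with bundle adjustment on top; this sketch only shows the geometric core:

```python
import math

def intersect_rays(p1, a1_deg, p2, a2_deg):
    """2D ray intersection: each ray starts at point p, heading at angle a."""
    d1 = (math.cos(math.radians(a1_deg)), math.sin(math.radians(a1_deg)))
    d2 = (math.cos(math.radians(a2_deg)), math.sin(math.radians(a2_deg)))
    # solve p1 + t*d1 == p2 + s*d2 for t via Cramer's rule
    denom = d1[0] * -d2[1] - d1[1] * -d2[0]
    t = ((p2[0] - p1[0]) * -d2[1] - (p2[1] - p1[1]) * -d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# two "cameras" on the x-axis both sighting the same feature
print(intersect_rays((0, 0), 45, (10, 0), 135))   # -> approximately (5.0, 5.0)
```

With many such intersections per image, the software can solve for the camera positions and the point positions simultaneously.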

Using Agisoft Metashape to construct the 3D Model by importing the photographs and generated masks.

Later on, I moved on to building the “dense cloud.” This step uses the camera positions calculated earlier to generate many new points along the contours of the object, going far beyond the sparse cloud. The resulting dense point cloud is a representation of the object made up of millions of tiny colored dots and already resembles the object itself. I then cleaned the dense cloud, removing noise and uncertain points to refine it further.
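The cleanup step can be sketched with a standard statistical-outlier filter: points whose average distance to their nearest neighbors is far above the cloud-wide average are treated as noise. This is the general technique used by point-cloud tools, not Metashape's exact implementation, and the brute-force distance matrix is only practical for a toy cloud.

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Minimal statistical-outlier filter: drop points whose mean distance
    to their k nearest neighbors exceeds the cloud-wide mean by more than
    std_ratio standard deviations."""
    k = min(k, len(points) - 1)
    # pairwise distances (fine for a toy cloud; real tools use a k-d tree)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)  # column 0 is distance to self
    cutoff = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= cutoff]
```

A stray point floating far from the tooth surface has distant nearest neighbors, so it falls above the cutoff and is discarded, while the dense surface points survive.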

Using Agisoft Metashape to construct the 3D Model by importing the photographs and generated masks.

Now it was time to build the geometry! This is what turns the point cloud into a solid, printable surface. Through this process, Metashape connects the dots by forming triangular polygons called “faces.” The more faces the model has, the more detailed it will be (it also uses more memory!). As a point of comparison, early 3D animations often appeared to be blocky objects with visible facets, and that was because those models had low face counts. High face counts offer greater refinement and realism.
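The face-count trade-off can be made concrete with a standard textbook example, the icosphere: each subdivision pass splits every triangular face into four, so detail and memory both grow geometrically. The numbers below are for that generic example, not for Metashape's meshes, and the per-face byte figure is only a ballpark assumption.

```python
def face_count(subdivisions):
    """Faces of an icosphere: an icosahedron starts with 20 triangular
    faces, and each subdivision pass splits every face into four."""
    return 20 * 4 ** subdivisions

def mesh_size_mb(faces, bytes_per_face=50):
    """Very rough memory estimate, assuming ~50 bytes per face to cover
    vertex indices, shared vertex coordinates, and normals."""
    return faces * bytes_per_face / 1e6
```

Three subdivisions already yield 1,280 faces; a dozen would push past 300 million, which is why high-detail scans are usually decimated before printing or web display.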

Lastly, I textured the model. Metashape uses dense cloud points to identify the color of each spot on the model. Texturing the model offers further realism because it applies the actual colors of the object (as photographed) to the resultant 3D model. 
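As a toy illustration of transferring color from the dense cloud to the model, the sketch below gives each mesh vertex the color of its nearest dense-cloud point. Metashape's actual texturing is more sophisticated (it projects the source photographs onto the mesh surface), so treat this purely as a conceptual sketch.

```python
import numpy as np

def color_vertices(vertices, cloud_xyz, cloud_rgb):
    """Assign each mesh vertex the RGB color of the nearest dense-cloud
    point: a simple stand-in for a real texturing pass."""
    # distance from every vertex to every cloud point (fine at toy scale)
    d = np.linalg.norm(vertices[:, None, :] - cloud_xyz[None, :, :], axis=2)
    return cloud_rgb[d.argmin(axis=1)]
```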

And that’s the general process I followed to turn a set of images into a high-quality 3D object using Metashape!

Printing the Model

We used calipers and recorded those measurements for later use in accurately scaling the digital object.

To print the final 3D model of the set of teeth, Beth and David worked on scaling it in Metashape. Earlier in the project, David had measured each set of teeth with calipers and recorded metric measurements. Then, Beth marked the endpoints of two sets of David’s measurements and set the length between them. Based on those known measurements, Metashape was then able to figure out the proportionate size of the rest of the model to within 0.1 mm.
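The arithmetic behind that scaling step is simple: divide the real caliper measurement by the distance between the two marked endpoints in the unscaled model, then multiply every vertex by that factor. The sketch below uses made-up coordinates and a hypothetical 42 mm measurement; in Metashape this is done through the scale-bar workflow rather than by hand.

```python
import numpy as np

def scale_model(vertices, marker_a, marker_b, real_length_mm):
    """Scale an unscaled photogrammetry model so that the distance between
    two marked endpoints matches a caliper measurement of the real object.
    Every vertex is multiplied by the same factor, so the whole model
    scales proportionately."""
    model_len = np.linalg.norm(np.asarray(marker_a) - np.asarray(marker_b))
    factor = real_length_mm / model_len
    return np.asarray(vertices) * factor, factor
```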

 

Valeria and David began printing a rough draft to preview how the models will look once the materials are finalized. 

Valeria and David completed printing a rough draft to verify that the size is accurate.

Next Steps

The final steps, which are scheduled to take place this summer, will be to:

  • Clean up the file structure of the four digital projects in preparation for permanent archiving in the college library;
  • Send the final digital files to Anubhav Preet Kaur in India; we will include .stl files so that she may 3D print her models locally.

Post Script (Feb 23, 2024)

We have completed and shared all four photogrammetry projects with Anubhav Preet Kaur. Each project includes the following:

  • All original photos
  • Final Metashape digital 3D photogrammetry objects, including texturing
  • .stl and .3mf files, each of which can be used to 3D print the digital object
  • A README text file that offers an overview of the project

We hope to add these 3D objects to this post later this year as rotatable, zoomable objects that can be viewed from all angles.

Sources

  1. Chauhan, Parth. (2022). Chrono-contextual issues at open-air Pleistocene vertebrate fossil sites of central and peninsular India and implications for Indian paleoanthropology. Geological Society, London, Special Publications. 515. 10.1144/SP515-2021-29. https://www.researchgate.net/publication/362424930_Chrono-contextual_issues_at_open-air_Pleistocene_vertebrate_fossil_sites_of_central_and_peninsular_India_and_implications_for_Indian_paleoanthropology
  2. Estes, R. (2023, June 8). bovid. Encyclopedia Britannica. https://www.britannica.com/animal/bovid
  3. Grun, R., Shackleton, N. J., & Deacon, H. J. (n.d.). Electron-spin-resonance dating of tooth enamel from Klasies River mouth … The University of Chicago Press Journals. https://www.journals.uchicago.edu/doi/abs/10.1086/203866 
  4. Lopez, V., & Kaur, A. P. (2023, February 11). Interview with Anubhav. personal. 
  5. Wikimedia Foundation. (2023, June 1). Geologic time scale. Wikipedia. https://en.wikipedia.org/wiki/Geologic_time_scale#Table_of_geologic_time 
  6. Williams College. (n.d.). Anne Skinner. Williams College Chemistry. https://chemistry.williams.edu/profile/askinner/ 
  7. Agisoft. (2022, November 4). Working with masks : Helpdesk Portal. Helpdesk Portal. Retrieved June 16, 2023, from https://agisoft.freshdesk.com/support/solutions/articles/31000153479-working-with-masks
  8. Hominin | Definition, Characteristics, & Family Tree | Britannica. (2023, June 9). Encyclopedia Britannica. Retrieved June 16, 2023, from https://www.britannica.com/topic/hominin

Spinning Tales: My Whimsical Adventure in Arduino Turntable Wonderland

Arduino turntable prototype (close up of gear)

I remember the day I first laid eyes on that clunky, awkward, yet fascinating automated burrito-making machine in the local toy store. It was love at first sight! I knew I had to make it mine, but alas, my piggy bank held only a handful of nickels and a couple of lint balls. Little did I know that my passion for robotics would lead me to a journey full of laughter, tears, and making the lives of hundreds of passionate photogrammetry hobbyists like me easier by creating an affordable DIY Arduino turntable.

Fast forward to 2023, when I found myself in our college Makerspace rotating an 80-thousand-year-old cave bear tooth in one-degree increments and taking 600 pictures, all with just two hands (which took me four hours and gave me two days of back pain). I found myself daydreaming about the kind of robot I would create if only I had the skills of Tony Stark. Soon afterward, while surfing the internet for ways to better optimize photogrammetry pictures for 3D scanning, I stumbled upon a YouTube photogrammetry tutorial and found out that there was a “thing” called a “turntable.” To my sadness, it cost $150. And that was my light-bulb moment. I thought, “Why not give it a try?” As I watched my Makerspace friends clumsily rotate a plastic hangman for 3D scanning, I had an epiphany – what if I built an AFFORDABLE automatic turntable to do the job for us?

Arduino turntable prototype (base, rotator, gear, spindle)

With the enthusiasm of a mad scientist, I proposed the idea to David, our Makerspace Program Manager, and he immediately approved it and sent me a couple of resources to start with (thanks, David, for being so supportive). I dove headfirst into the world of turntables that people had previously made. I found Adrian Glasser – a professional computer scientist and consultant – who had already built a prototype similar to the one I was planning. Although Adrian’s project was pretty cool, it required fancy components that were relatively expensive. I also found Brian Brocken, a passionate maker and 3D printer, whose turntable project stood out and strongly inspired the design of my prototype. While these works were a great source of inspiration, I kept returning to the question of how to make the design and features more efficient while keeping the device affordable and easy to build.

The journey was fraught with challenges and unexpected twists, but I was determined to build the most magnificent, borderline-overengineered turntable the world had ever seen (just kidding!). I worked iteratively: my first draft was a very basic model, so that I could feel it with my hands and think through the build process. I 3D printed a PLA (a type of 3D-printing filament) base, a rotating platform, and some gears and bearings. After researching different approaches, I ordered my first set of electronic components and kept the total cost below $60 for this first version.

Arduino circuit board and LCD screen

Arduino circuit board and LCD screen

I decided to go with the Arduino Uno, a very easy-to-program and flexible microcontroller that would be the brains of my device. “Easy to build for everyone” was lingering in my mind when I chose the components. I got a stepper motor – which moves in discrete increments, unlike a DC motor’s continuous rotation – coupled with a motor driver to enable precise, sequential one-degree rotations with a super-low margin of error. To make the turntable more user-friendly, I added a simple LCD display and a rotary encoder for adjusting the rotation speed. After two weeks of assembly and testing, I had a fully functional circuit. 
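The one-degree move boils down to a unit conversion from degrees to driver pulses. The sketch below (in Python for readability; the actual firmware is Arduino code) assumes a typical NEMA 17-class stepper with 200 full steps per revolution and a driver set to 16x microstepping; the real motor and driver settings in the build may differ.

```python
def steps_for_degrees(degrees, steps_per_rev=200, microsteps=16):
    """Convert a rotation in degrees to driver pulses. With 200 full steps
    per revolution (1.8 degrees each) and 16x microstepping, one revolution
    is 3200 pulses, so one degree is about 8.9 pulses, rounded to the
    nearest whole step."""
    return round(degrees * steps_per_rev * microsteps / 360)
```

Because 3200 pulses per revolution does not divide evenly into 360 one-degree moves, repeatedly rounding each move would drift slightly; a careful sketch tracks the cumulative target position in steps and issues the difference instead.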

Now it’s time to code! The hardest part of coding was finding the library file on the internet that corresponded to my particular stepper motor. It took me 4 hours just to find the library and start coding! Phew…

I kept writing code for a week and then moved on to testing my code. Overcoming the challenges of building my robotic turntable was like conquering Mount Everest. I spent hours troubleshooting the Arduino code, sifting through lines of syntax until my eyes crossed. But, much like a robot phoenix, I rose from the ashes, armed with patience, persistence, and an endless supply of coffee. After a few weeks of tinkering and testing, I finally had a circuit and a working code that I marked as a BIG CHECKPOINT for the project.

The spring semester gradually came to an end, and the turntable project took a summer vacation. But next semester, the first prototype of the turntable is going to see the light of day. 

Next Steps

  1. Using Fusion360 to design an easy-to-print downloadable 3D model (stl file) 
  2. Using Infra-Red (IR) sensors to automate the camera shutter click with each one-degree rotation of the turntable, so that our Makerspace friends can leave the automated turntable working (extra hours!) overnight **insert cruel laugh**
  3. Sharing the technical details and build process online to make the project accessible to other Makerspace groups and hobbyists around the world, by posting a follow-up blog with all the technical details. I hope to publish step-by-step instructions along with the final list of parts (with URLs), my custom Arduino code, a link to the software library that corresponds to my stepper motor, and downloadable .stl files for printing my custom 3D models.

Affordability

I hope to keep the project affordable and my goal is for all costs to be under $70.

Conclusion

During this journey, I learned the importance of patience, collaboration, and perseverance. Building a robotic turntable from scratch is not a one-person job, and I found myself relying on the support and expertise of my fellow Makerspace friends. Together, we shared our knowledge and skills, which not only allowed me to build a better turntable but also contributed to the overall growth and development of our Makerspace community. I enlisted the help of my fellow Makerspace comrades, who offered their own unique brand of wisdom, ranging from programming tips to advice on how to make the turntable levitate. (Note: do not try to make your turntable levitate. It’s a bad idea.)

The Arduino turntable project wasn’t just about creating a cool gadget – it was about embracing my love for robotics and the creative process. In the end, I learned that a healthy dose of humor, imagination, and the willingness to make things up as you go can lead to some truly spectacular results.

Today, my beloved half-constructed Arduino turntable takes pride of place on the little yellow Makerspace table, a constant reminder of progress, the power of imagination, and the beautiful chaos that comes with it. So, dear reader, I encourage you to explore your own interests, whether that’s robotics or any other field that sparks your curiosity. Be open to surprises, maintain a sense of humor when facing challenges, and always remember that amazing innovations often start with bold ideas.