Simulating Spaces with AR

At age nine, I had a bicycle accident (and yes, for those who know me, I can’t swim, but I can pretty much ride a bike, thank you!). It was not that unusual as bike falls go: I was going perhaps faster than my mom allowed at the time, and I bumped into a really, really BIG rock. In great pain and crying very much, I was picked up by someone nearby, and I said: “I want to go home, give me my tablet.” A very Gen-Z answer from me, and I don’t recommend that readers have such an attachment to their devices. But let’s be honest—would I have been in such a situation at the time if I had been peacefully playing The Sims instead of performing dangerous activities (such as bike riding) in real life? Is there a fine line between real and virtual? Can I immerse myself in a virtual environment where I *feel* like I am driving *insert cool vehicle* without actually driving it?

Augmented Reality (AR) is something I have been interested in learning more about as an internet geek. Although I count stars for a living now (I am an astrophysics major), I am still very much intrigued by the world of AR. Whenever there is a cool apparatus in front of me, I take full advantage of it and try to learn as much as I can about it. That’s why one of my favorite on-campus jobs is at the Williams College Makerspace! It is the place where I get to be a part of a plethora of cool projects, teach myself some stuff, and share it with the world (i.e., as of now, the College campus and the greater Williamstown community!). Fast forward to my sophomore year of college: Professor Giuseppina Forte, Assistant Professor of Architecture and Environmental Studies, reached out to the Makerspace about building a virtual world from students’ creativity in her class “ENVI 316: Governing Cities by Design: the Built Environment as a Technology of Space”. The course uses multimedia place-based projects to explore and construct equitable built environments, and tools like augmented reality can enhance students’ perspectives on the spaces they imagine by making them a reality.

This project would not have been possible without the help of the Makerspace Program Manager, David Keiser-Clark. He made sure that there was enough communication between Professor Forte and me so that deadlines were met both for the in-class project and for the Williams College “Big Art Show”. In short, my role was to help students enhance their architectural designs with augmented reality simulations. The process involved quite a few technical and creative challenges, leading to a lot of growth as a Makerspacian, especially since I had no background in AR before taking part in this project!

Choosing Tools and Techniques

My role in this project was to research current augmented reality software, select one tool, and then teach students in the course how to use it. In consultation with Giuseppina and David, we chose Adobe Aero because it’s free, easy to use, and has lots of cool features for augmented reality. Adobe Aero lets us place digital content in the real world, which is perfect for the architectural designs in the “ENVI 316: Governing Cities by Design” course. I then set up a project file repository and added guides that I created, such as “Interactive Objects and Triggers in Adobe Aero” and “How to Use Adobe Aero”. This documentation is intended to help students and teaching assistants make their own AR simulations during this — and future — semesters. This way, everyone can try out AR tools and learn how to apply them in their projects, making learning both fun and interactive.

AR Simulations: My process

Once we had all the tools set up with Adobe Aero, it was time to actually start creating the AR simulations. I learned a lot by watching YouTube tutorials and reading online blogs. These resources showed me how to add different elements to our projects, like trees in front of buildings or people walking down the street.

Here’s a breakdown of how the process looked for me:

  1. Starting the Project: I would open Adobe Aero and begin a new project by selecting the environment where the AR will be deployed. This could be an image of a street or a model of a building façade.
  2. Adding 3D Elements: Using the tools within Aero, I dragged and dropped 3D models that I previously created in Procreate into the scene. I adjusted their positions to fit naturally in front of the buildings.
  3. Animating the Scene: To bring the scene to life, I added simple animations, like people walking or leaves rustling in the wind—there was also the option to add animals like birds or cats, which was lovely. Aero’s user-friendly interface made these tasks intuitive, and videos online like this one were extremely helpful along the way!
  4. Viewing in Real-Time: One of the coolest parts was viewing the augmented reality live through my tablet. I could walk around and see how the digital additions interacted with the physical world in real-time.
  5. Refining the Details: Often, I’d notice things that needed adjustment—maybe a tree was too large, or the animations were not smooth. Going back and tweaking these details was crucial to ensure everything looked just right. Fig. 1, 2 & 3 show an example of a small project I did when I was just starting out.

Fig. 1: “Maker-Space” 3D model viewed through a tablet, positioned in front of Chapin Hall at Williams College.

Fig. 2: Sketch of “Maker-Space” created in Procreate on my tablet.

Fig. 3: Me standing in front of Chapin Hall, where the previous AR model was displayed using my tablet.

Final Presentation & Lessons Learned

In Fig. 4 & 5 below, you can see a side-by-side comparison of real-life vs AR spaces during the Williams College “Big Art Show” in the fall 2024 semester. The student who used the AR techniques decided to place plants, trees, people, and animals around the main road to make the scene look more lively and realistic.

Fig. 4: Exhibition at the “Williams College Big Art Show” featuring 3D printed houses and buildings alongside a main road.

Fig. 5: Live recording of an AR space in Adobe Aero, enhanced with added people, trees, and birds to create a more memorable scene.

Reflecting on this project, I’ve picked up a few key lessons. First, jumping into something new like augmented reality showed me that with a bit of curiosity, even concepts that seem hard at first become fun. It also taught me the importance of just trying things out and learning as I go. This project really opened my eyes to how technology can bring classroom concepts to life—in this case, the makerspace!—making learning more engaging. Going forward, I’m taking these lessons with me.

Pixels or Petals? Comparing Physical vs. Digital Learning Experiences

Fig. 1: Isabelle Jiménez and Harper Treschuk outside the Williams College Makerspace located in Sawyer 248

Learning has not been the same since COVID. Just like the vast majority of students around the world, my classes were interrupted by the COVID pandemic back in 2020. After having classes canceled for two weeks, and in an effort to get back on track, my high school decided to go remote and use Google Meet as an alternative to in-person learning. Remote learning did not feel the same — it meant using PDF files instead of books for online classes, meeting with peers over video conferencing for group projects, and taking notes on my computer and studying only digital material for exams. I cannot say that I was not learning, because that would not be the best way to describe it, but I can say that something rewired my brain and I have not been able to go back. Due to COVID and other factors, the use of simulations in schools may increasingly supplant hands-on learning, and more research needs to be done not only on the implications for content knowledge but also on students’ development of observational skills.

Fig. 2: Sketchfab provides a digital view of the 3D model of a lily, accessible via an iPad interface. This interface allows the children at Pine Cobble School to engage with and explore the object in a virtual environment.

Last week, Williams College students Isabelle Jiménez ‘26 and Harper Treschuk ‘26 visited the Makerspace to start a project for their Psychology class, “PSYC 338: Inquiry, Inventions, and Ideas”, taught by Professor Susan L. Engel, Senior Lecturer in Psychology & Senior Faculty Fellow at the Rice Center for Teaching. This class includes an empirical project that challenges students to apply concepts about children’s curiosity and ideas to a developmental psychology study. Isabelle and Harper decided to analyze young children’s ideas following observations of plants, more specifically flower species. They plan to compare how two groups of similarly aged children interact with flowers. The first group will interact with real flowers and will be able to touch and play with the plants (Fig. 1), and the second group will interact with 3D models of the plants using electronic devices (iPads) that enable them to rotate and zoom in on the flowers (Fig. 2). By analyzing the interactions of children with real and simulated flowers, they hope to extend existing research on hands-on and virtual learning to a younger age range. Valeria Lopez ‘26 was the lead Makerspace student worker who assisted them in creating the necessary models, a process that will be covered in this blog post.

I was excited to learn about Isabelle and Harper’s project and quickly became involved by assisting them in using Polycam 3D, a mobile photogrammetry app. This app enabled us to quickly create three-dimensional digital models of physical flowers. We opted for photogrammetry as our method of choice because of its versatility—it can model almost anything given enough patience and processing power. Photogrammetry involves capturing a series of photos of an object from various angles, which are then processed by software into a coherent three-dimensional digital model. To meet our project’s tight deadline, we decided to experiment with smartphone apps like RealityScan and Polycam, which offer a user-friendly approach to 3D object creation. While our standard photogrammetry workflow in the Makerspace provides greater precision, it requires more time and training because it uses equipment such as a DSLR camera, an automated infrared turntable, a lightbox, and Metashape software for post-processing. Despite initial setbacks with RealityScan, we successfully transitioned to Polycam and efficiently generated 3D models. These models serve as educational resources for children, and since precise accuracy wasn’t necessary for this project, using a mobile app proved sufficient. This rapid approach ensures that the 3D models will be ready in time for the educational teach-in Isabelle and Harper are organizing at Pine Cobble School.

Process

Fig. 3: This scene features a daffodil placed atop a turntable, all enclosed within a well-lit box to enhance visibility and detail.

We began our project by utilizing the photography equipment at the Makerspace in Sawyer Library to capture images of flowers in vases. Initially, we were careful to avoid using the provided clear glass vases because translucent and shiny objects are more difficult for the software to render correctly into accurate models. With the guidance of David Keiser-Clark, our Makerspace Program Manager, we selected a vase that provided a stark contrast to both the background and the flowers, ensuring the software could differentiate between them (Fig. 3 & 4).

Fig. 4: In the foreground, a phone is mounted on a tripod, positioned to capture the flower’s movement.

Setup

Our setup involved placing the flowers on a turntable inside a lightbox and securing the smartphone, which we used for photography, on a tripod. 

Troubleshooting

Fig. 5: Isabelle and Valeria (Makerspace student worker who participated in this project) analyze the 3D models in Polycam.

Our initial approach involved seeking out a well-lit area with natural lighting and placing the plant on a table with a contrasting color. However, we soon realized that the traditional method of keeping the phone stationary while rotating the subject wasn’t optimal for software designed for smartphones. While this approach is commonly used in traditional photogrammetry, our mobile app performed better with movement. Recognizing this, we adjusted our strategy to circle the subject in a 360-degree motion, capturing extensive coverage. This resulted in 150 pictures taken for each flower, totaling 450 pictures. Despite initial setbacks with two different photogrammetry apps, our second attempt with Polycam proved successful, allowing for more efficient and accurate processing of the models (see Fig. 5).

Results

Fig. 6: An alstroemeria flower model, one of the final models uploaded to Sketchfab. Users will be able to interact with the object by rotating it a full 360 degrees.

We did not expect to need to do so much troubleshooting! In all, we spent 45 minutes loading and testing three different apps before settling on one that worked successfully. We are extremely happy with the end results. As a final step, I uploaded our three models to Sketchfab to ensure that the children could easily access them across different devices (Fig. 6).
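For readers curious how that last step can be automated, Sketchfab offers a public Data API (v3) for uploads. Here is a minimal sketch in Python using the requests library; the API token, file name, and metadata are placeholders rather than the actual values from our project.

```python
# Minimal sketch of uploading a model to Sketchfab via its Data API (v3).
# Assumes you have an API token from your Sketchfab account settings;
# the file name and metadata below are placeholders.
import requests

API_TOKEN = "your-sketchfab-api-token"   # placeholder
MODEL_FILE = "alstroemeria.glb"          # placeholder export from Polycam

with open(MODEL_FILE, "rb") as f:
    response = requests.post(
        "https://api.sketchfab.com/v3/models",
        headers={"Authorization": f"Token {API_TOKEN}"},
        files={"modelFile": f},
        data={
            "name": "Alstroemeria flower",
            "description": "Photogrammetry model created with Polycam",
        },
    )

response.raise_for_status()
print("Uploaded, model uid:", response.json()["uid"])
```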

Next Steps

  1. Engage with Isabelle and Harper to gather their general impressions of the kindergarteners’ and first graders’ interactions with the real and digital 3D models, while still maintaining complete confidentiality of the results.
  2. Take the opportunity to delve deeper into mobile photogrammetry tools and document the process thoroughly. Share this documentation with other makerspace student workers and the wider community to facilitate learning and exploration in this area. 
  3. Collaborate with other departments on similar projects that utilize 3D objects to enhance educational experiences, fostering interdisciplinary partnerships and knowledge exchange.

From Teeth to Time: Discovering Siwalik Hills’ Past Through Archaeology

How did we get here? Where do we come from? What does our future encompass? As an aspiring scientist, I have always been fascinated by these (and many more!) questions about the evolution of humanity and the cosmos. Specifically, the modern ways in which experts around the world are working towards finding a unifying, concrete answer about the theory of evolution and dispersal of early humans. To my pleasant surprise, scientists at Williams College are making wonderful discoveries and progress on this topic, and I was able to contribute — even just a tiny bit — to some of their work this semester!

Some Background

Anubhav Preet Kaur pictured working at the ESR Lab at Williams College

Scientists believe that early humans dispersed throughout the world because of changing global climates. The specific routes that these early humans took remain unresolved. However, there are several hypotheses about the possible areas they inhabited, given Early Pleistocene evidence of hominin occupation in those areas. The hypothesis I will explore in this blog post relates to evidence of hominin occupation from regions around the Indian subcontinent: Dmanisi, Nihewan, and Ubeidiya, just to name a few sites.

One of the supporters of this hypothesis is Anubhav Preet Kaur, an archeologist conducting a paleoanthropological research project that seeks to identify whether the Siwalik Hills in India were a likely dispersal path for early humans. As Anubhav states: “The fossils of Homo erectus, one of the first known early human species to disperse outside of Africa, have been discovered from Early Pleistocene deposits of East Europe, West Asia, and Southeast Asia, thereby placing Indian Subcontinent in general—and the Siwalik Hills, in particular—as an important dispersal route.” The problem is that no fossil hominin remains, nor any evidence attributed to early hominin occupation, have ever been uncovered in that area. Thus, her project seeks to paint a clearer prehistoric picture of the region’s ecology by precisely dating faunal remains from her dig sites. She hopes to determine whether the Siwalik Hills, already famous for yielding many paleontological and archeological finds over the past hundred-plus years, had fauna and ecological conditions during these migratory time periods that could have supported early humans. And precisely dating these faunal remains requires the skills of Dr. Anne Skinner, a renowned lecturer at Williams College.

Anne is a distinguished Williams College emerita chemistry faculty member who is an expert in electron spin resonance (ESR) and specializes in applying ESR techniques to study geological and archaeological materials. Anubhav is a Smithsonian Institute Predoctoral Fellow and presently a doctoral student at the Indian Institute of Science Education and Research in Mohali, India. Anubhav spent three field seasons, between 2020 and 2022, doing paleontological surveys and geological excavations in the Siwalik Hills region of India. She led a team of undergraduate and graduate field assistants and volunteers in searching for clues that might indicate whether the conditions were suitable for hominins. Ultimately, she brought a selection of her fossils to Williamstown, MA, so that Anne could begin to teach her the process of using ESR to date her objects.

What is ESR?

ESR is a technique used on non-hominin remains that allows scientists to measure the amount of radiation damage a buried object—in this case, several partial sets of animal teeth—has received, providing insights into its geological and biological history. The Siwalik Hills region is particularly important for archaeologists because it is home to rich deposits of fossil remains that date from the Miocene to the Pleistocene; Anubhav’s sites, in particular, contain remains from the Pliocene and Pleistocene. The periods represented at her sites are relevant because those are the periods in which she theorizes a dispersal could have happened, making the study of the remains more informative. The region is located in the northern part of India (near the border with Pakistan) and covers an area of about 2,400 square kilometers. The fossils Anubhav and her team collected (~0.63-2.58 Myr) include the remains of Pleistocene mammals, such as bovids, porcupines, deer, and elephants, and they have been used as a tool for archaeologists to learn more about the region’s past climate and ecology.
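To make “radiation damage” a little more concrete, the core relationship behind ESR dating boils down to a single division: the age is the total radiation dose the tooth enamel has absorbed, divided by the dose it receives each year. The numbers in the sketch below are invented for illustration and are not from Anubhav’s samples; real analyses also model uranium uptake and changing burial conditions.

```python
# Simplified illustration of the core ESR dating relationship:
#   age ≈ equivalent dose / annual dose rate
# The values below are invented for illustration only.

equivalent_dose_gy = 1800.0    # grays: total radiation dose recorded in the enamel
annual_dose_rate_gy = 0.0012   # grays per year: dose from sediment + internal sources

age_years = equivalent_dose_gy / annual_dose_rate_gy
print(f"Estimated age: {age_years:,.0f} years")  # -> Estimated age: 1,500,000 years
```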

The Story Starts Here

On January 9, 2023, Anne and Anubhav visited the Williams College Makerspace and asked if we could create high-quality 3D models that would persist as a permanent scientific record for four sets of Pleistocene mammalian teeth that would soon be destroyed as a consequence of ESR dating. Electron spin resonance is currently the most highly specific form of dating for objects up to 2 Mya, and it is used only with animal remains because the dating process requires crushing the material into powder in order to analyze it with highly sensitive equipment. Hominin remains are widely considered too rare and valuable to allow destructive dating, while animal remains are relatively more common. Creating high-quality 3D objects provides researchers with a means to effectively consult, and do further research on, a digital reconstruction of the object at a future date. In addition, the 3D objects are the basis for creating 3D prints of the object for physical study and handling.

Furthermore, ESR is a rare and expensive technique that is only available at a limited number of sites throughout Australia, Japan, Brazil, Spain, France, and the United States. Williams College is, in fact, the only facility in all of North America with ESR equipment, and Anne is the only ESR specialist at Williams. 

My Job

This spring, I collaborated on this 3D modeling project with David Keiser-Clark, the Makerspace Program Manager. We divided the job so that each of us was in charge of producing two unique 3D models of the highest quality. We began the project by holding a kickoff meeting with Anubhav and Anne to discuss project needs and to receive four sets of prehistoric teeth. Throughout the project, we held additional meetings to discuss progress and, finally, to present finished 3D digital and printed models. Despite the fact that this was my first photogrammetry assignment, I embraced the challenge head-on, working autonomously and engaging with stakeholders whenever necessary.

To build the 3D models, I used a photographic method known as photogrammetry. This required putting together many orbits of images using software to create a three-dimensional object. I participated in two workshops offered by Beth Fischer, Assistant Curator of Digital Learning and Research at the Williams College Museum of Art, to develop knowledge of this procedure. Her thorough understanding of the intricate workings of our photogrammetry software, Agisoft Metashape, was incredibly helpful. Beth was a great resource and was willing to meet with us numerous times. Moreover, I shared what I learned with David (and the entire Makerspace team) so that we could update the Makerspace’s new documentation on photogrammetry. By sharing my experiences, I helped to guarantee that the documentation addressed a wide range of challenging edge-case scenarios and would serve as a thorough and useful reference for future student workers.

Here is a walkthrough of the photogrammetry process:

Taking the Pictures

Valeria and David took an average of 341 pictures for each of the four sets of teeth (a total of 1,365 photographs).

I collaborated with David to take clear images from every aspect and dimension. We took a hands-on approach, testing different angles and lighting settings to look for the best approach to photograph each tooth. I first relied on natural lighting and a plain background. After a couple of runs, however, David pushed the concept to the next level by adding a photography lightbox, which allowed us to shoot higher-quality photographs with bright lighting and without shadows. These photos served as the foundation for subsequent work with the photogrammetry software.

Meeting with Anubhav

Valeria interviewed Anubhav Preet Kaur before starting the 3D model process.

I wanted to know more about the scope of the project and what function my contribution might serve. In order to gain a better understanding of the scientific process, I interviewed Anubhav, whose insight shed light on the significance of her research within the larger scientific field. This conversation helped me understand the purpose of the 3D models I was making, especially given the impending pulverization of the teeth via the ESR process. Furthermore, it emphasized the critical need for an accurate digital 3D model, as well as a physical model, that would endure beyond the destruction of the original objects.

Using Photoshop to Create Masks: What is a Mask?

Valeria encountered several challenges when importing masks. However, Beth supported her in her journey, and they overcame those obstacles together.

Masks play a crucial role in the model-building process in Agisoft Metashape, as they provide precise control over the specific portions of an image used for generating the model. This level of control ensures the resulting reconstruction is accurate and detailed by eliminating irrelevant or problematic features. I used Adobe Photoshop to create masks for each set of teeth, and this proved to be one of the most challenging aspects of the entire project. Because the sets of photos had varying angles and lighting conditions, I collaborated with Beth Fischer to troubleshoot and overcome these obstacles. This collaborative effort deepened both David’s and my understanding of the process and enabled him to document the issues I faced, and their solutions, for future students. After approximately one month of persistent trial and error and several meetings with Beth, we successfully identified effective solutions to the problems we encountered.
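For anyone repeating this step in bulk, Metashape Professional’s Python API can attach pre-made masks to every photo in one call. The snippet below is only a sketch under assumptions: it supposes the Photoshop masks were exported as PNGs into a masks folder and named after each photo, which is not necessarily how our project files were organized.

```python
# Sketch: attaching Photoshop-made masks to photos in Metashape via Python.
# Assumes masks live in a "masks" folder and are named "<photo name>_mask.png";
# the project file name is hypothetical.
import Metashape

doc = Metashape.Document()
doc.open("teeth_set_1.psx")   # hypothetical Metashape project
chunk = doc.chunk

chunk.importMasks(
    path="masks/{filename}_mask.png",              # {filename} expands per photo
    source=Metashape.MaskSourceFile,               # read masks from image files
    operation=Metashape.MaskOperationReplacement,  # replace any existing masks
)
doc.save()
```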

Using Metashape to Create the 3D Model

Using Agisoft Metashape to construct the 3D Model by importing the photographs and generated masks.

When you use Metashape, it starts by scanning each image and looking for specific points that stand out, like a small group of dark pixels in a larger area of light pixels. These distinctive points are called “key points,” and the software only searches for them in the unmasked areas of the image. Once it finds these key points, Metashape starts to match them across multiple images. If it succeeds in finding matches, these points become “tie points.” If enough points are found between two images, the software links those images together. Together, these many tie points form a “sparse point cloud.” The tie points anchor each image’s spatial orientation to the other images in the dataset—it’s a bit like using trigonometry to connect the images via known points. Since Metashape knows the relative positions of multiple tie points in a given image, it can calculate an image’s precise placement relative to the rest of the object. After that process, I made the model even more accurate by using “gradual selection” to refine the sparse point cloud, and then I “optimized cameras” to remove any uncertain points (yay!).
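Those same clean-up steps can be scripted as well. The sketch below uses the class names from Metashape’s 1.x Python API (later versions renamed PointCloud.Filter to TiePoints.Filter), and the error threshold is illustrative rather than the value I actually used.

```python
# Sketch of "gradual selection" plus camera optimization via Metashape's Python API.
# Uses Metashape 1.x naming; the 0.5-pixel threshold is illustrative only.
import Metashape

chunk = Metashape.app.document.chunk   # assumes a project is already open

# Remove tie points whose reprojection error is too high
point_filter = Metashape.PointCloud.Filter()
point_filter.init(chunk, criterion=Metashape.PointCloud.Filter.ReprojectionError)
point_filter.removePoints(0.5)

# Re-optimize the camera alignment now that uncertain points are gone
chunk.optimizeCameras()
```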

Using Agisoft Metashape to construct the 3D Model by importing the photographs and generated masks.

Later on, I moved on to building the “dense cloud.” This process uses the camera positions calculated earlier, together with the refined sparse cloud, to generate new points that trace the contours of the object. The resultant dense point cloud is a representation of the object made up of millions of tiny colored dots, resembling the object itself. I then cleaned the dense cloud to further refine it by removing any noise or uncertain points.

Using Agisoft Metashape to construct the 3D Model by importing the photographs and generated masks.

Now it was time to build the geometry! This is what turns the point cloud into a solid, printable surface. Through this process, Metashape connects the dots by forming triangular polygons called “faces.” The more faces the model has, the more detailed it will be (it also uses more memory!). As a point of comparison, early 3D animations often appeared to be blocky objects with visible facets, and that was because those models had low face counts. High face counts offer greater refinement and realism.

Lastly, I textured the model. Metashape uses dense cloud points to identify the color of each spot on the model. Texturing the model offers further realism because it applies the actual colors of the object (as photographed) to the resultant 3D model. 

And that’s the general process I followed to turn a set of images into a high-quality 3D object using Metashape!
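For anyone who would rather script that workflow than click through the GUI, Metashape Professional ships with a Python API. The outline below mirrors the steps above (align, dense cloud, mesh, texture, export); method names shift a little between versions and the file paths are placeholders, so treat it as a sketch rather than a drop-in script.

```python
# Sketch of the Metashape pipeline as a Python script (Metashape Professional).
# Paths are placeholders; buildDenseCloud() is the 1.x name (2.x uses buildPointCloud()).
import glob
import Metashape

doc = Metashape.Document()
doc.save("tooth_model.psx")                        # hypothetical project file
chunk = doc.addChunk()
chunk.addPhotos(sorted(glob.glob("photos/*.jpg")))

# Alignment: find key points, match them into tie points (the sparse point cloud)
chunk.matchPhotos(generic_preselection=True)
chunk.alignCameras()
chunk.optimizeCameras()

# Dense reconstruction: depth maps, then the dense point cloud
chunk.buildDepthMaps()
chunk.buildDenseCloud()   # buildPointCloud() in Metashape 2.x

# Geometry and texture: triangular faces, UV layout, photo-based colors
chunk.buildModel()
chunk.buildUV()
chunk.buildTexture()

doc.save()
chunk.exportModel("tooth_model.stl")   # .stl for 3D printing
```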

Printing the Model

We used calipers and recorded those measurements for later use with accurately scaling the digital object.

To print the final 3D model of the set of teeth, Beth and David worked on scaling it in Metashape. Earlier in the project, David had measured each set of teeth with calipers and recorded metric measurements. Then, Beth marked the endpoints of two sets of David’s measurements and set the length between them. Based on those known measurements, Metashape was then able to figure out the proportionate size of the rest of the model to within 0.1 mm.
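In script form, that scaling step looks roughly like the sketch below. It assumes two markers have already been placed in the GUI on the endpoints of one measured span, and the 32.5 mm distance is an invented example rather than David’s actual measurement.

```python
# Sketch: applying a caliper measurement as a Metashape scale bar via Python.
# Assumes two markers were already placed on the measured endpoints in the GUI;
# the 32.5 mm distance is illustrative only.
import Metashape

chunk = Metashape.app.document.chunk

marker_a, marker_b = chunk.markers[0], chunk.markers[1]
scalebar = chunk.addScalebar(marker_a, marker_b)
scalebar.reference.distance = 0.0325   # meters (32.5 mm measured with calipers)

chunk.updateTransform()                # rescale the model to real-world units
```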

Valeria and David began printing a rough draft of how the models will look once the materials are set. 

Valeria and David completed printing a rough draft to verify that the size is accurate.

Next Steps

The final steps, which are scheduled to take place this summer, will be to:

  • Clean up the file structure of the four digital projects in preparation for permanent archiving in the college library;
  • Send the final digital files to Anubhav Preet Kaur in India; we will include .stl files so that she may 3D print her models locally.

Post Script (Feb 23, 2024)

We have completed and shared all four photogrammetry projects with Anubhav Preet Kaur. Each project includes the following:

  • All original photos
  • Final Metashape digital 3D photogrammetry objects, including texturing
  • A .stl and .3mf file, each of which can be used to 3D print the digital object
  • A README text file that offers an overview of the project

We hope to add these 3D objects to this post later this year as rotatable, zoomable objects that can be viewed from all angles.

Sources

  1. Chauhan, Parth. (2022). Chrono-contextual issues at open-air Pleistocene vertebrate fossil sites of central and peninsular India and implications for Indian paleoanthropology. Geological Society, London, Special Publications. 515. 10.1144/SP515-2021-29. https://www.researchgate.net/publication/362424930_Chrono-contextual_issues_at_open-air_Pleistocene_vertebrate_fossil_sites_of_central_and_peninsular_India_and_implications_for_Indian_paleoanthropology
  2. Estes, R. (2023, June 8). bovid. Encyclopedia Britannica. https://www.britannica.com/animal/bovid
  3. Grun, R., Shackleton, N. J., & Deacon, H. J. (n.d.). Electron-spin-resonance dating of tooth enamel from Klasies River mouth … The University of Chicago Press Journals. https://www.journals.uchicago.edu/doi/abs/10.1086/203866 
  4. Lopez, V., & Kaur, A. P. (2023, February 11). Interview with Anubhav. personal. 
  5. Wikimedia Foundation. (2023, June 1). Geologic time scale. Wikipedia. https://en.wikipedia.org/wiki/Geologic_time_scale#Table_of_geologic_time 
  6. Williams College. (n.d.). Anne Skinner. Williams College Chemistry. https://chemistry.williams.edu/profile/askinner/ 
  7. Agisoft. (2022, November 4). Working with masks : Helpdesk Portal. Helpdesk Portal. Retrieved June 16, 2023, from https://agisoft.freshdesk.com/support/solutions/articles/31000153479-working-with-masks
  8. Hominin | Definition, Characteristics, & Family Tree | Britannica. (2023, June 9). Encyclopedia Britannica. Retrieved June 16, 2023, from https://www.britannica.com/topic/hominin

The Fine Art of Unclogging

Picture this: You have a hard time deciding what you want to print at the Williams Makerspace, you talk to your friends to brainstorm the best possible artifact, and just when you finally decide to print your long-awaited masterpiece, you find out that the 3D printer is broken. This not-so-uncommon outcome can be disappointing. But as a student worker at the Williams Makerspace, I can tell you that this is totally normal! One possible reason is that the 3D printer is clogged. In this blog post, I will talk about my experience unclogging a Dremel DigiLab 3D45 for the first time.

First things first — purging the filament! At this stage, we don’t know what might be causing the clog, so purging the filament is a safe start. To do so, we cut the filament and press the “Purge” button in the preheating options section. Once clicked, the Dremel should start purging all the filament out, cleaning the inside.

Figure 1. Purging the filament

Unfortunately, purging the filament didn’t fix the issue in this case, so I had to go a little further! To make sure there were no clogs in the stepper motor, I had to turn the Dremel off and allow the extruder and print bed to cool to 60°C or below. Then, I removed the right screw on the bottom of the housing using a T10 Torx bit. From there, I removed the two screws on top of the extruder housing using a 2.5mm hex bit. At this point, I removed the top cover and unplugged the filament runout switch to disconnect the extruder terminal box. I then loosened (but did not fully remove) the two motor screws using a 2.5mm Allen key. This allowed me to remove the extruder stepper motor assembly. Taking a clean brush, I gently cleaned the motor as carefully as possible and then put everything back in place. And just like that (drum roll, please), the Dremel DigiLab 3D45 was unclogged!

Figure 2. Taking the cover off

I know this might sound like a lot at first — because it is! But as you get used to working with 3D printers, you will encounter this and many other problems along the way — so beware! For me, one of the best parts of working with these kinds of machines is learning how to use them and fix them! So, the next time you walk into the Williams Makerspace, rest assured that we will guide you through any questions or concerns about 3D prints or 3D printers. The best part is that if we don’t know the answer off the top of our heads, we will do our best to answer it as soon as we can.