Lost but Found in the Photogrammetry World

The Quandary:

Have you ever broken or lost a small part of an important object you value? Perhaps the strap of that beautiful watch you got from your grandma, or the battery cover for the back of your remote control? You looked for it everywhere, but the part was too “insignificant” to be sold on its own. Or it just wasn’t the sort of thing anyone would expect to need a replacement for.

The original black “obsolete plastic object” (on left) keeping files safely stored, alongside the newly cloned red part (on right)

Last semester at Williams College, Chris Koné, Associate Professor of German and Director of the Oakley Center for the Humanities & Social Sciences, had a similar experience. He lost an integral part of his desk that allows him to keep his files neatly stored and organized (shown in the picture). Desperate to have a place for the files and papers scattered miserably on the floor, Prof. Koné looked in a brick-and-mortar NYC office parts store, as well as on Amazon, eBay, and other e-commerce websites, but alas, the object was nowhere to be found. It had become obsolete!

The “obsolete plastic object”

Determined to leave no stone unturned in finding a replacement for the obsolete plastic object, Prof. Koné did what any sensible person with access to the Makerspace would do – he asked for a 3D-printed model of the object! And it is here that he met me, an intern working at the Makerspace over the summer. In the process of helping him, I learned about multiple methods of photogrammetry and created a significantly more efficient and streamlined workflow for the Makerspace. 

Some Background

I was a new student worker with zero knowledge about photogrammetry and 3D printing, and David Keiser-Clark, the Makerspace Program Manager, thought this project would be just the right amount of challenge for me. Photogrammetry is the process of creating a three-dimensional digital model of an object by taking dozens or hundreds of photos of the object from different angles and processing them with software to create a digital spatial representation of the object. Doing this project would be a good introduction to the 3D digital world while allowing me to get acquainted with the Makerspace.

If you have tried photogrammetry, you know that some of the most difficult objects to work with are those that are dark or shiny. This object was dark and shiny! When an object is dark, it becomes difficult for the software to distinguish one feature on the object from another, resulting in an inaccurate digital representation. Likewise, light is reflected when an object is shiny, resulting in images that lack details in the shiny areas. Thus, you can imagine how challenging it is when your object is both shiny and dark!

Step 1

The first step was to figure out how to reduce the darkness and shininess of the object. To kill two birds with one stone, I covered the object with white baby powder, a cheaper alternative to the expensive photogrammetry sprays used in industry. The powder’s white color would help eliminate the object’s darkness and offer it some helpful texture, while its anti-reflective nature would reduce shininess. After several attempts to completely cover the object, this method proved ineffective: the powder would not stick to the object’s smooth surface. A little out-of-the-box thinking led me to cover the object with matte blue paper tape, which proved very effective, as the tape’s rough texture allowed minimal light reflection.

obsolete plastic object coated with blue tape

A Bit of Photography 

Milton taking pictures for photogrammetry

Now that the two biggest giants had been slain, it was time to move on to the next step: taking pictures of the object. Taking shots for photogrammetry is very similar to doing stop-motion animation. You take a picture of the object, rotate it by a small angle (between 5 and 15 degrees) by hand or with a turntable (a rotating disc), and take another picture. You repeat this process until the object has rotated completely, change the camera angle (e.g., by taking shots from above the object), and redo the whole process. This can be quite tedious, especially if you have to do it by hand, but luckily for me, the Makerspace had recently bought a new automated turntable, so I didn’t have to rotate the object manually. I also got to be the first to write a documentation guide so that other Makerspace student workers can more easily use the turntable in the future!
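
For planning purposes, the arithmetic behind a full capture session is easy to sketch. The snippet below is purely illustrative (the function name and the three-elevation example are mine, not part of any Makerspace tooling):

```python
from math import ceil

def capture_plan(step_deg, camera_heights):
    """Photos required for a full photogrammetry pass.

    step_deg: turntable rotation between shots (the 5-15 degree range above)
    camera_heights: how many camera elevations you shoot the object from
    """
    shots_per_revolution = ceil(360 / step_deg)  # one full rotation
    return shots_per_revolution * camera_heights

# 10-degree steps photographed from 3 camera heights:
print(capture_plan(10, 3))  # 108 photos
```

At 5-degree steps, the same three-height setup would need 216 photos, which is exactly why an automated turntable is such a relief.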

Alignment Process

Once the photos were ready, the next step was to analyze them using photogrammetry software. I turned to Agisoft Metashape, a powerful program that takes pictures of an object from different angles and analyzes them to create a 3D depiction of the object. The software first finds common points between the various images, called anchor points, and calculates their relative distances, allowing it to place them in 3D space. This process is called alignment.
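
Metashape’s actual alignment involves feature detection and bundle adjustment across hundreds of photos, but the core geometric idea (recovering a matched point’s position in 3D space from its offset between two views) can be sketched for the simplified case of two side-by-side cameras. Everything here, from the names to the focal length and pixel values, is illustrative and is not Metashape’s API:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a matched point seen by two horizontally offset cameras."""
    return focal_px * baseline_m / disparity_px

def back_project(x_px, y_px, cx, cy, focal_px, depth_m):
    """Turn a pixel plus its depth into XYZ camera-space coordinates."""
    X = (x_px - cx) * depth_m / focal_px
    Y = (y_px - cy) * depth_m / focal_px
    return (X, Y, depth_m)

# A feature that shifts 40 px between two cameras 0.1 m apart (f = 800 px)
z = depth_from_disparity(800, 0.1, 40)           # 2.0 m away
print(back_project(500, 300, 400, 300, 800, z))  # (0.25, 0.0, 2.0)
```

Doing this for thousands of matched points at once, while simultaneously solving for where each camera was, is what makes alignment both powerful and fragile: with too few reliable matches (as on dark, shiny, or hollow objects), the solve falls apart.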

Unfortunately, despite my efforts to aid the software by covering the object with matte blue tape to reduce its shininess and darkness, the obsolete plastic object did not align properly in Metashape. While I could not pinpoint the exact reason, I suspect it was due to its hollow shape, which made it challenging for the software to capture points on the inner surfaces, especially the corners. It was quite disappointing to get these results, especially after having had to wade through Metashape’s jungle of commands, but that was certainly not the end of it all. I decided to try a different approach – raise an older desktop 3D scanner from the grave!

Misalignment in Metashape

The Hewlett Packard (HP) 3D Structured Light Scanner

The DAVID 3D Scanner (now called the HP 3D Structured Light Scanner) works by projecting patterns of light onto a subject and capturing how those patterns deform across its surface. From that deformation, the software triangulates the distance of each point. These points, represented as XYZ coordinates, are collectively used to digitally reconstruct the object in 3D space. I intended to use the structured light scanner as an alternative to Metashape because it allows more control over the alignment process. For example, you can select two specific scans you want to align and tell the software how you want them aligned. In addition, the scanner features a projector that sheds light on the object you’re scanning, as well as a calibrated background panel, allowing for greater detail to be picked up.

HP 3D Structured Light Scanner

A Bit of Scanner Surgery

Using the HP 3D Structured Light Scanner

The Makerspace’s HP scanner unfortunately hadn’t been functional in over three years. The camera was not working, and the scanner’s software could not export files due to licensing issues. I updated the device’s software and installed new camera drivers, and in no time, the scanner was fully functional again. I then scanned the obsolete plastic object with the structured light scanner. Unfortunately, the results were unsatisfactory: the scan resolved the alignment issue I had hit in Metashape, but the digital model had thin walls and holes on some of its surfaces, making it impossible to print.

Thin walls and holes in the structured light scanner model

Building from the Ground Up with Fusion 360

Results of different lighting settings in the HP 3D Structured Light Scanner

After trying out different strategies with the HP 3D Structured Light Scanner, such as different light settings, and still not getting good results, David suggested a different method: building the model from scratch! Excited to try out new software (and get a break from the structured light scanner!), I began exploring Fusion 360 tutorials and documentation. Autodesk Fusion 360 is Computer-Aided Design (CAD) software with applications across various sectors, including manufacturing, engineering, and electronics. It allows one to create a simple sketch of a model and build it into a solid model with precise dimensions. You can even add simulations of real-world features such as materials and lighting.

Of course, this new, complicated piece of software came with its challenges. For example, I had to know the dimensions of the fillets (the rounded arcs) inside and outside my object. A little creativity combined with a pair of vernier calipers and a piece of paper did the job. Another challenge was understanding the timeline feature of Fusion 360, one of the most important features of the program, which records your modeling steps and lets you go back to a certain point. Researching online and getting help from a friend (shoutout to Oscar!) with more experience in Fusion 360 proved helpful in better understanding the software.
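
For the curious, one low-tech way to measure a fillet with nothing but calipers and paper is the chord-and-sagitta trick: press a paper edge against the arc, measure the arc’s width (the chord, c) and its depth (the sagitta, h), and compute r = (c² + 4h²) / (8h). I’ll present it as an assumed method rather than an exact record of what I did; the function name and sample numbers below are illustrative:

```python
def fillet_radius(chord, sagitta):
    """Radius of a circular arc from its chord length and depth (sagitta).

    Uses the circle-segment identity r = (c^2 + 4*h^2) / (8*h).
    Units just need to be consistent (e.g., millimeters from calipers).
    """
    return (chord**2 + 4 * sagitta**2) / (8 * sagitta)

# An arc 12 mm wide and 2 mm deep: (144 + 16) / 16 = 10 mm radius
print(fillet_radius(12.0, 2.0))  # 10.0

# Sanity check: a semicircle's depth equals its radius
print(fillet_radius(10.0, 5.0))  # 5.0
```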

Successful Fusion 360 model of the obsolete plastic object

Fusion 360 timeline for modeling the obsolete plastic object

The Obsolete Plastic Object Was No Longer Obsolete

Finally, after several days of learning Fusion 360 and incrementally building a model, the obsolete plastic object was no longer obsolete. I produced an accurate model of the object and printed several copies, which Professor Koné was more than happy to receive. His files had regained their home, and time spent scouring eBay and Amazon for a nameless object had come to an end!

The red part (right) is the new clone of the original black “obsolete plastic object” (left). Files are once again safely organized.

Conclusion

My experience working on photogrammetry and 3D modeling at the Makerspace was certainly full of twists and turns, but it was definitely worth it. I learned how to use several very complicated software applications, significantly improved the Makerspace photogrammetry procedure (reducing a three-month process to one or two days), and approached new challenges with an open mind.

Prof. Koné and myself holding the original (covered in blue tape) and a newly printed black 3D “obsolete” plastic object

Next Steps

I look forward to exploring other methods of photogrammetry, particularly ones that require less equipment, such as those that use only a smartphone. RealityScan is one promising alternative that can create lower-resolution scans and models in less than 15 minutes. With new technologies coming out every day, there are many avenues to explore, and I’m excited to discover better methods.

Screenshot: Experimenting with the Reality Scan smartphone app

Pixels or Petals? Comparing Physical vs. Digital Learning Experiences

Fig. 1: Isabelle Jiménez and Harper Treschuk outside the Williams College Makerspace located in Sawyer 248

Learning has not been the same since COVID. Like the vast majority of students around the world, I had my classes interrupted by the COVID pandemic back in 2020. After having classes canceled for two weeks, and in an effort to get back on track, my high school decided to go remote and use Google Meet as an alternative to in-person learning. Remote learning did not feel the same: we used PDF files instead of books, met with peers over video conferencing for group projects, and took notes on a computer while studying only digital material for exams. I cannot say that I was not learning, but something rewired my brain, and I have not been able to go back. Due to COVID and other factors, the use of simulations in schools may increasingly supplant hands-on learning, and more research needs to be done not only on the implications for content knowledge but also for students’ development of observational skills.

Fig. 2: Sketchfab provides a digital view of the 3D model of a lily, accessible via an iPad interface. This interface allows the children at Pine Cobble School to engage with and explore the object in a virtual environment.

Last week, Williams College students Isabelle Jiménez ‘26 and Harper Treschuk ‘26 visited the Makerspace to start a project for their Psychology class, “PSYC 338: Inquiry, Inventions, and Ideas,” taught by Professor Susan L. Engel, Senior Lecturer in Psychology & Senior Faculty Fellow at the Rice Center for Teaching. The class includes an empirical project that challenges students to apply concepts about children’s curiosity and ideas to a developmental psychology study. Isabelle and Harper decided to analyze the ideas young children form after observing plants, more specifically flowers. They plan to compare how two groups of similarly aged children interact with flowers. The first group will interact with real flowers and will be able to touch and play with the plants (Fig. 1), and the second group will interact with 3D models of the plants on iPads that let them rotate and zoom in on the flowers (Fig. 2). By analyzing the interactions of children with real and simulated flowers, they hope to extend existing research on hands-on and virtual learning to a younger age range. Valeria Lopez ‘26 was the lead Makerspace student worker who assisted them in creating the necessary models, a process covered in this blog post.

I was excited to learn about Isabelle and Harper’s project and quickly became involved by assisting them in using Polycam, a mobile photogrammetry app. This app enabled us to quickly create three-dimensional digital models of physical flowers. We opted for photogrammetry as our method of choice due to its versatility: it can model almost anything given enough patience and processing power. Photogrammetry involves capturing a series of photos of an object from various angles, which are then processed by software to create a coherent three-dimensional digital model. To meet our project’s tight deadline, we decided to experiment with smartphone apps like RealityScan and Polycam, which offer a user-friendly approach to 3D object creation. While our standard photogrammetry workflow in the Makerspace provides greater precision, it requires more time and training because it uses equipment such as a DSLR camera, an automated infrared turntable, a lightbox, and Metashape software for post-processing. Despite initial setbacks with RealityScan, we successfully transitioned to Polycam and efficiently generated 3D models. These models serve as educational resources for children, and since precise accuracy wasn’t necessary for this project, using a mobile app proved sufficient. This rapid approach ensured that the 3D models would be ready in time for the educational teach-in Isabelle and Harper are organizing at Pine Cobble School.

Process

Fig. 3: This scene features a daffodil placed atop a turntable, all enclosed within a well-lit box to enhance visibility and detail.

We began our project by utilizing the photography equipment at the Makerspace in Sawyer Library to capture images of flowers in vases. We were careful to avoid the provided clear glass vases, because translucent and shiny objects are difficult for the software to reconstruct accurately. With the guidance of David Keiser-Clark, our Makerspace Program Manager, we selected a vase that provided a stark contrast to both the background and the flowers, ensuring the software could differentiate between them (Fig. 3 & 4).

Fig 4: In the foreground, a phone is mounted on a tripod, positioned to capture the flower's movement.

Setup

Our setup involved placing the flowers on a turntable inside a lightbox and securing the smartphone, which we used for photography, on a tripod. 

Troubleshooting

Fig. 5: Isabelle and Valeria (Makerspace student worker who participated in this project) analyze the 3D models in Polycam.

Our initial approach involved seeking out a well-lit area with natural lighting and placing the plant on a table with a contrasting color. However, we soon realized that the traditional method of keeping the phone stationary while rotating the subject wasn’t optimal for software designed for smartphones. While this approach is commonly used in traditional photogrammetry, our mobile app performed better with camera movement. Recognizing this, we adjusted our strategy to circle the subject in a 360-degree motion, capturing extensive coverage. This resulted in 150 pictures per flower, or 450 pictures in total. Despite initial setbacks with two different photogrammetry apps, our second attempt with Polycam proved successful, allowing for more efficient and accurate processing of the models (see Fig. 5).

Results

Fig. 6: An alstroemeria flower model, one of the final models uploaded to Sketchfab. Users can interact with the object by rotating it a full 360 degrees.

We did not expect to need to do so much troubleshooting! In all, we spent 45 minutes loading and testing three different apps before settling on one that worked successfully. We are extremely happy with the end results. As a final step, I uploaded our three models to Sketchfab so that the children could easily access them across different devices (Fig. 6).

Next Steps

  1. Engage with Isabelle and Harper to gather their general impressions on the kindergarteners and first graders’ interactions with the real and digital 3D models while still maintaining complete confidentiality of the results.
  2. Take the opportunity to delve deeper into mobile photogrammetry tools and document the process thoroughly. Share this documentation with other makerspace student workers and the wider community to facilitate learning and exploration in this area. 
  3. Collaborate with other departments on similar projects that utilize 3D objects to enhance educational experiences, fostering interdisciplinary partnerships and knowledge exchange.

Postscript (May 10, 2024)

Isabelle and Harper report that their educational teach-in at Pine Cobble School using the 3D flowers was a success:

The students were all able to rotate them and zoom in and out. We noticed that as expected students in the virtual condition reported visual observations while students in the physical condition reported tactile observations as well (but no observations about smell) — interestingly, this didn’t affect the number of observations between the conditions. Students were engaged with the materials although for a couple students we wondered if they became enraptured with the iPad rather than the task of observation itself — they were zooming out so far in order to make a flower disappear. Thanks again for your collaboration and support on this class project. We are interested to hear if the Makerspace decides to partner with the folks at the Cal Poly Humboldt Library in the future.

Makerspace Collaborating on Multiple Sustainability Projects

Last spring semester, the Makerspace @ Williams College pivoted to focus on academic projects that support teaching and learning goals; previously, this focus had been an aspirational goal. The Makerspace Program Manager, David Keiser-Clark, and his team of amazing student workers now support a dozen interdisciplinary academic and campus projects at a time. A quarter of these projects support sustainability, or specifically the Zero Waste Action Plan, including: (1) a three-college collaboration to create an eco-friendly deterrent for Japanese beetles in our community garden; (2) a prototype to upcycle plastic bottles into 3D printer filament; and (3) a set of laser engraved wood signs, sustainably harvested from Hopkins Forest, for a Stockbridge-Munsee led garden video and audio tour at the Mission House in Stockbridge, MA. Below, you’ll find a brief spotlight on each project, and possible ways we might build on these initial efforts.

E4 Bug Off Team Project: Mitigating Japanese Beetle Damage

E4 Bug Off Team Project, installed in the Williams College Community Garden

The E4 Bug Off Team is a collaborative environmental project between engineering students from Harvey Mudd and Pomona Colleges, and students working with the Williams College Makerspace and Zilkha Center. The engineering students researched and developed a prototype that would safely repel Japanese beetles to hopefully stop them from defoliating raspberry bushes in the Williams College Community Garden. The Makerspace used 3D printers to create the parts and subsequently assembled the model. Zilkha Center interns then deployed the model in the gardens. The device is designed to be low-maintenance and only needs the reservoir filled weekly with 100% peppermint essential oil. Japanese beetles, in addition to other bugs and mammals, dislike the smell of the mint family, and the concentrated peppermint essential oil diffuses into the air via permeable wicks that extend from the reservoir tank.

One of five engineering diagrams from the 30-page E4 Bug Off Team Project.

The initial model was installed in the garden in July 2022, at the tail end of the raspberry season, and immediately leaked. This spring (2023), the Makerspace re-printed the reservoir tank with a higher density (50% solid as compared to 15%), tested the model and, after 24 hours, found it to be 100% water-tight. This second model was introduced into the garden with mixed results: the functional model performs as intended, but the impact is difficult to measure without a control plot or method of measuring beetle activity this year. 

In addition to recording measurements of a control plot, additional steps to increase effectiveness could include fabricating additional models to better saturate the air within the berry patch or returning the project to the engineering team for design modifications. The final version would be printed with ASA filament, which is physically stronger and UV/moisture resistant, as compared to PLA or ABS filaments.

To learn more about this project, read this blog post by Makerspace student worker Leah Williams.

Contributors: Harvey Mudd College (Students: Javier Perez, Linna Cubbage, Eli Schwarz, Stephanie Huang; Professors Steven Santana and TJ Tsai), Pomona College (Student: Betsy Ding), Zilkha Center (Students: Martha Carlson, Evan Chester, Sabrina Antrosio; Staff: Tanja Srebotnjak, Mike Evans, Christine Seibert) and Makerspace (Student: Leah Williams; Staff: David Keiser-Clark)

Polyformer: Sustainable 3D Printing at Williams College

While completing a month-long Zero Waste Internship at the Zilkha Center (through the ’68 Career Center’s career exploration Winter Study course), Camily Hidalgo pitched building a machine to convert waste plastic into usable 3D printer filament. The project aligns with the Williams College Zero Waste Action Plan, which is based on the sustainability strategy in the Williams College Strategic Plan. She envisioned this as being a collaborative effort between the Williams College Zilkha Center and the Makerspace. 

After researching several options, she selected the Polyformer because it is an open-source (publicly accessible) project that seeks to create a DIY kit, composed of standard and commonly found parts, able to convert and upcycle plastic bottles (waste) into usable 3D printer filament. The project launched in May 2022 and has quickly amassed more than 4,000 people who follow and/or contribute to it (on Discord), while a core group of dedicated volunteers develops the project.

Many of the 78 printed parts that will be assembled into the Polyformer.

The intended outcome is to build a machine, based on standardized specifications, that slices a water bottle into a half-inch-wide ribbon and then feeds that ribbon through a heated funnel, called a hot end, to extrude it as 1.75 mm PET filament. Camily seeks to create a working prototype to demonstrate our ability to disrupt our plastic waste stream and upcycle it into usable 3D printer filament. Approximately 40 bottles are required to create a standard 1 kg roll of filament (enough to print 6 of the aforementioned beetle devices!). This project seeks to raise awareness that we can reduce the quantity of waste the college ships offsite while using that waste to create new filament, thereby purchasing less virgin material from China. Upcycling waste can reduce the environmental impacts associated with extracting raw materials and manufacturing products, as well as the significant carbon footprint of shipping those products to us from the other side of the globe.
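
Using the figures above (roughly 40 bottles per 1 kg roll, and about 6 beetle devices printable per roll), it is easy to estimate how many bottles a print run would divert from the waste stream. The whole-roll assumption and the function name below are mine; this is a back-of-the-envelope sketch, not part of the Polyformer project:

```python
from math import ceil

BOTTLES_PER_ROLL = 40   # bottles upcycled into one standard 1 kg roll
DEVICES_PER_ROLL = 6    # beetle deterrent devices printable per roll

def bottles_diverted(devices):
    """Bottles upcycled to print a given number of devices.

    Assumes filament is produced in whole 1 kg rolls.
    """
    rolls = ceil(devices / DEVICES_PER_ROLL)
    return rolls * BOTTLES_PER_ROLL

print(bottles_diverted(6))   # 40 bottles (one roll)
print(bottles_diverted(10))  # 80 bottles (two rolls)
```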

Polyformer diagram for building the "Right Arm Drive Unit Subassembly."

Camily Hidalgo notes that this project is complicated because the design is constantly being improved. Additionally, it requires 3D printing 78 individual parts and then assembling those with a kit of sourced materials that includes a circuit board, LCD screen, a volcano heater block and 0.4 mm hot end, a stepper motor, stainless steel tubing, bearings, neodymium magnets, lots of wires, and lots of metal fasteners.

This project began last spring semester and, as of this summer, all 78 parts have been locally printed. Assembly has begun and will be completed during the fall semester, followed by actual testing under a science lab exhaust hood to safely capture the antimony compounds and other fumes released when PET reaches its melting point.

To learn more about this project, read this blog post by Makerspace student worker Camily Hidalgo.

Contributors: Zilkha Center (Student: Camily Hidalgo; Staff: Tanja Srebotnjak, Mike Evans, Christine Seibert), Makerspace (Students: Camily Hidalgo, Milton Vento; Staff: David Keiser-Clark), Chemistry (Professors: Chris and Sarah Goh; Staff: Gisela Demant, Jay Racela)

Laser Engraving: Stockbridge-Munsee Garden Video and Audio Tour

Yoheidy Feliz connecting a red maple slab to a slanted locust base, with dowels and wood glue.

The Stockbridge-Munsee Community Historic Preservation Office summer intern, Yoheidy Feliz, reached out to the Zilkha Center for help with creating locally sourced wooden signs for a permanent video and audio tour at the Stockbridge-Munsee Garden in Stockbridge, MA. She received a dozen sugar maple and red maple discs, plus locust wedges, all sustainably harvested from already fallen trees in the Williams College Hopkins Forest. 

Yoheidy approached the Makerspace and, in collaboration with expertise and tools from the Science Shop, learned how to use an industrial laser engraving machine to etch a welcome sign with QR code, as well as multiple audio guide messages, onto sanded wooden discs. She attached these discs to sloped wooden bases (“wedges”) using woodworking dowel joinery, wood glue and a mallet, and then applied a natural, non-toxic preservative coating of Walrus-brand tung oil. 

Yoheidy sits with her series of laser engraved wood slabs. She later added a laser engraved metal QR code label that directs users to the hosted video tour.

The day after completing all of this work, she installed the signs at the Mission House garden and then created these stunning video and audio tours to guide local and remote viewers through the gardens.

To learn more about this project, please be on the lookout for an upcoming Makerspace guest blog post by Yoheidy Feliz.
Contributors: Stockbridge-Munsee Community Historic Preservation Office (Staff: Bonney Hartley, Historic Preservation Manager; Student: Yoheidy Feliz), Science Shop (Staff: Jason Mativi, Michael Taylor), CES & Zilkha Center (Staff: Drew Jones, Christine Seibert), Makerspace (Staff: David Keiser-Clark)

Cloning the Last of its Kind

Milton Vento ‘26 using photogrammetry to create a 3D object

Most recently, Associate Professor of German Chris Koné approached the Makerspace with a problem: all but one of the file hanging clips on his beloved office desk had broken. The result: piles of overflowing manila folders surrounding his desk, cramping his office and his style. He searched eBay, Etsy, and Amazon, but was unable to find replacement parts. He even visited a store in NYC that specializes in office parts. Alas, the parts were obsolete. So he asked the Makerspace if we might be able to replicate his last remaining viable part.

Milton Vento and Chris Koné hold the original and cloned objects.

Milton Vento, the Makerspace’s summer student worker, took on the task as his first project, using it as an opportunity to learn photogrammetry, an accessible and low-cost method of taking many photographs of an object from varying angles and then using software to stitch them together into a 3D digital object. He expanded the project by testing four different methods of creating 3D objects: standard manual DSLR photogrammetry with Metashape software; photogrammetry using a smart turntable that sends an infrared signal to the DSLR camera to release the shutter, advances several degrees, and repeats; an older DAVID5 object scanner; and the RealityScan app, which requires only a smartphone. This exploration resulted in two distinctly more efficient workflows that will become standard use this fall in the Makerspace.

He also successfully re-created a 3D model of the final remaining desk part, and printed and delivered a half dozen copies to Chris. Should any of these ever break, the file can easily be retrieved and re-printed.
Contributors: German Department (Professor: Chris Koné), Makerspace (Staff: David Keiser-Clark, Student: Milton Vento)

Future Project Ideas

One upcoming and likely collaboration between the Makerspace and the Zilkha Center would be to laser etch additional sustainably harvested Hopkins Forest wood slices to create signs for the Williams College Community Garden. Additionally, the Zilkha Center, Makerspace, and MCLA Physics and Environmental Center may brainstorm the possibility of creating a larger prototype for upcycling plastic into pellets. The pellets could then be used for injection molding, given to local artists for artwork, or sold regionally; this idea was sparked by Smith College’s collaboration with Precious Plastics.


You can find this blog post and other sustainability projects at sustainability.williams.edu.

3D Scanning: Trials and Tribulations

Our scanner setup. The camera and projector are used to take scans, while the dotted board in the upper left is necessary for calibration. The various items under the mount are just there to hold it up; since this image was taken, we’ve mounted the tripod to a more stable wooden board.

This semester I’ve spent most of my time trying to get our DAVID5 (bought by HP and since discontinued) 3D Scanner operational. The scanner is a neat tool that takes images of whatever object we put in front of it and stitches them together to create a 3D model, which can then be printed with the 3D printers. In the past, workers at the Makerspace have even used it to scan and print people’s faces! Unfortunately for us, those workers who knew how the scanner functions have since graduated, leaving me to figure it out myself. 

Coming into this project, I was told that the scanner wasn’t operational; nobody had even been able to get it past the calibration phase. Luckily, we had the manual on hand. I walked myself through the process for setup and calibration, and it actually worked! I even managed to take some rough images in order to create a full model.

Unfortunately, the computer we had running the DAVID software turned out to be not powerful enough to actually fuse together that model, and the whole thing crashed. To make matters worse, we couldn’t save any scans due to an issue with the license that kept us stuck on a trial version of the software. Luckily, the first problem was a fairly easy fix. We were able to get a more powerful used computer, and I used it to successfully put together a rough but complete model! 

The license has been more of a struggle, however: it wouldn’t work regardless of where on the computer or USB drive I saved it. There have been physical problems too. We didn’t have a proper stand for the scanner components, so I’d been propping them up on random parts I found around the room, and the scanner sat too high, so objects on the table weren’t fully in its field of view.

While working on this post, however, I made a great deal of progress on these issues. We now have a working license, a more stable stand, and a strategy to raise objects for easier scans. Unfortunately, these fixes have somehow led to more problems. Now, the camera isn’t working with the scanner software. Thankfully, the HP support team has been very helpful, so hopefully I’ll be able to get everything working and put together some cleaner scans soon.