Local ChatGPT: A Board Enclosure for Williams’ Micro AI

Imagine tinkering with a generative AI. The typical scenario is that you ask it a question and it gives you an answer. But this time, rather than asking a question, you dictate how it answers. Instead of being a mere user, you are the brain behind the AI.

What is generative AI?

A type of AI that quickly generates answers, information, and content based on a variety of user inputs (Nvidia, 2024). It typically has an interface where users can type their prompts. Generally, these models can take and produce text, images, sound, animation, 3D models, or other types of data.

At Williams, there exists a local generative AI called EphBot. Unlike mainstream generative AI services (e.g., ChatGPT, Gemini), which connect you to a huge database stored on powerful servers, EphBot is a tiny device that fits in the palm of your hand. EphBot offers AI for experimentation and exploration while ensuring complete data privacy, because it runs locally and does not interact with the Internet or other databases.

Now what?

The Office for Information Technology (OIT) at Williams College is developing another micro AI just like EphBot. Mr. Gerol Petruzella, an Academic Technology Consultant in OIT, is the project developer of the upcoming micro AI, called NanoBot. When I asked him about the purpose and significance of the project, he responded:

“For students and faculty at Williams to explore and experiment critically with generative AI. I believe passionately that all Williams students should have the opportunity to be more than merely users of generative AI applications.”

The NanoBot project is the second anticipated micro AI at Williams College. Like ChatGPT and Gemini, it is a generative AI; however, instead of leaving you a mere user, NanoBot gives you the opportunity to experiment on the AI itself.

Why is it necessary to create a casing for the microAI?

Before I go deeper into that question, let us start the story from the beginning. Gerol is using the NVIDIA Jetson Nano Developer Kit to create the NanoBot. It is a small AI computer that allows a user to build practical AI applications, cool AI robots, and more.

“I reached out to the Makerspace because the Jetson Nano Developer Kit provides a bare board, but no case or enclosure,” said Mr. Petruzella.

He noted that the Jetson Nano Developer Kit, which is the NanoBot itself, lacks a protective enclosure for its main board. That is a real problem for hardware intended to be handled by many different people on loan through the Williams Library.

“Since my goal is to develop units which students and others in the Williams community can check out and use, the device needed a case, to make it sturdy and usable (avoiding both damage to the device and harm to the user!)”

Indeed, a protective cover would make the device sturdier and reduce the risk of harming the people who use it. But from what types of harm, specifically, would the enclosure offer protection?

Physical Protection

If the NanoBot is to be used by the public, accidental bumps, drops, and other physical impacts are bound to happen. Not to mention dust, dirt, and other particles that can accumulate on internal components and cause malfunctions.

Thermal Management

The enclosure is designed to have ventilation in order to help dissipate heat generated by the hardware, preventing overheating and ensuring optimal performance. By controlling the internal environment, it can help maintain a stable operating temperature for sensitive components.

Electrical Safety

It may be a small device, but it is still powered by electricity. The enclosure provides electrical insulation, protecting users from accidental contact with live components and reducing the risk of electric shock. When it comes to electrical damage, prevention is better than a cure.

You can read more here about enclosures.

Why not just order one online?

“I couldn’t find any commercially-available case for this model, but I did discover a recipe on Thingiverse, so using the resources of the Williams Makerspace seemed like a great solution,” said Mr. Petruzella.

The main objective of this project was to fabricate a cost-effective enclosure for the Jetson Nano Board. Specifically, this project aimed to create an enclosure that can:

  1. Protect the device from physical impacts
  2. Withstand high operating temperatures without melting
  3. Serve as an outer insulation for the device 

Printing with ASA Filament

Filament Type: PolyLite ASA

Specification:

  • Print Temperature: 240 – 260 °C
  • Print Speed: 30 – 50 mm/s
  • Bed Temperature: 75 – 95 °C
  • Fan: OFF
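For convenience, these settings map onto a PrusaSlicer filament profile roughly as follows. This is only a sketch using PrusaSlicer's INI key names; verify the keys against your installed profiles before relying on it (print speed lives in the print profile, not the filament profile):

```ini
# PolyLite ASA filament profile fragment (values from the spec list above)
temperature = 250          # nozzle temperature, middle of the 240-260 C range
bed_temperature = 85       # middle of the 75-95 C range
cooling = 0                # part-cooling fan OFF; ASA warps under forced cooling
fan_always_on = 0
```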

Caution
The fumes emitted by ASA filament can be dangerous when inhaled. ASA gives off an intense, unpleasant smoke caused by the styrene present in this plastic compound (MakeShaper, 2020). These fumes can cause health issues such as headaches and irritation, among others. It is recommended to use a fume extraction system while printing; we used BOFA fume extractors.

Blueprint of the top enclosure in Prusa Slicer Software.


Step 1: Acquire the 3D Model of the Enclosure
The 3D model was pre-modeled by Ecoiras on Thingiverse. I downloaded it and, using the PrusaSlicer software, converted it into a file that the Prusa i3 (3D printer) can read. You are always welcome to customize your own design.

The Prusa i3 (3D Printer) printing the enclosure.


Step 2: Configure the 3D Printer and Load the Assigned Filament
Then, wait for it to print. Prints sometimes fail, and that is completely normal; just print again until it succeeds. After a successful print, slowly scrape the part off the plate on which it was printed.

The NanoBot with its new enclosure.


Step 3: Fit the Finished Product
This is the finished product; feel free to change the color if you want. We chose to print this case in ASA filament, instead of the more common PLA filament, because ASA softens at a temperature roughly 50 degrees Celsius higher. That means the heat generated by generative AI computing is less likely to deform the case.

Community

In this technology-driven era, generative AI continues to grow in performance and popularity. NanoBot will empower students and faculty to become active participants rather than passive consumers of generative AI technology.

The NanoBot gives users the ability to transcend the state of being mere users—how do you want to configure AI?

Resources

Nvidia (2024). “What Is Generative AI?” NVIDIA. https://www.nvidia.com/en-us/glossary/generative-ai/

MakeShaper (2020). “3D Printing: Understanding More About ASA Filament Applications.” MakeShaper. https://www.makeshaper.com/post/3d-printing-understanding-more-about-asa-filament-applications

Benchys on Benches and Sailors on Shelves

The objectives of this project were to 1) build a 3D model and print from scratch to accumulate hands-on CAD and prototyping experience for future modeling and printing projects, and 2) build a practical object—in this case, shelves resting on the windowsill in the Makerspace that can contain and display Benchys—3D boat models used for calibrating and benchmarking 3D-printing performance.

First, I measured the width of the windowsill (1.5”) and the dimensions of a typical 3D Benchy (2.5” x 1.25” x 2.0”). Using those measurements, I created a shelf exactly 1.5” wide with the outline and sketch features in Fusion 360, so that it would sit flush on our windowsill.

The dimensions of each compartment needed to be slightly larger than the dimensions of the Benchy to allow for movement. So, I sketched a rectangle around the perimeter of the Benchy with an additional .25” of room in the width and height to allow for “tolerance” in the geometric dimensioning. I then used the “mirror” action in Fusion 360 to duplicate the compartments into a 4×4 grid of 16 shelves.

(I sketched a blue rectangle with the same length and height as the Benchy: 2.5” x 2.0”. This served as a helpful tolerancing reference.)

(I used the “extrude” feature in Fusion 360 to add a width of 1.5” to the original 2-dimensional sketch, thereby transforming it into a 3D model.)

Upon completing the 3D model and initializing the 3D printing process, I discovered that the model’s width and height exceeded the dimensions of the standard Prusa i3 MK3S bed. To solve this problem, I could have remodeled the shelf to fit those dimensions, or sliced the prototype and printed it in four iterations. Instead, I printed the original prototype on the larger Prusa XL. Looking forward to future projects, I’ll carefully consider the geometric dimensions of my 3D models relative to the volumetric constraints of the 3D printers to ensure successful prints.

Special thanks to Stepher Sabio (’28) and David Keiser-Clark, Makerspace Program Manager, for assisting in the 3D printing process!

Alumni Reunion Weekend at the Makerspace

During the sunny and pleasant reunion weekend of June 7th and 8th, the Makerspace was bustling, offering tours and hands-on making experiences to over 200 Williams alums and their families. We prepared a hands-on project that would allow people to use 3D-printed molds to cast Makerspace-themed coasters, sourced from upcycled Amazon cardboard boxes. This fun experience allowed us to share and discuss an environmentally friendly DIY project that people could easily replicate at home. People can even create their own custom molds!

During the alumni reunion weekend, the kids seemed most excited to mix the ingredients, mold the pulp, and finally clamp the coasters. They also got to take home coasters that we had prepared (and dried!) ahead of time.

Alums in the Makerspace on June 7th, 2024


Recipe

  • Cardboard boxes (50g)
  • Water (170g)
  • PVA Glue (15g) (we used Titebond II woodworkers glue; Elmer’s white glue works, too)

Tools

Instructions

  • Cut the Amazon boxes into small pieces
  • Add into the blender: 50g of cardboard, 170g of water, and 15g of glue

    The kids were excited to mix the ingredients (cardboard, water, and glue)


  • Blend until it’s thick and looks like wet clay
  • Assemble the 3D-printed mold: we used and modified this Pulp-it model

    Kids took turns squeezing extra water from the pulp


  • Put the pulp in a cheese cloth and squeeze the excess water out
  • Fill the mold with the damp pulp
  • Press the pulp with your hands so that it is dense and evenly distributed in the mold

    And this is how you squeeze the clamps on the mold!


  • Attach the lid to the mold
  • Press the mold using a clamp
  • Let it dry for 24 hours
  • Carefully remove it from the mold and gently place it to dry in direct sunlight (or in front of a fan or heater vent) for about 6 hours
  • It should now be 100% dry and solid
  • Nice work!

Fusion 360 software: We ended up iterating and tried inverting the extrusion of our design. Which version do you like better?


The kids had a blast making the coasters while learning how upcycling minimizes waste in our environment. This activity demonstrated how individual actions, no matter how small, can collectively drive positive change.

A pile of upcycled coasters made by our alumni's children (from scrap Amazon boxes)


According to the Environmental Protection Agency

Packaging materials accounted for 28.1 percent of total municipal solid waste (MSW), amounting to 82.2 million tons generated in 2018. This volume poses a high environmental risk and requires both systemic and individual action to mitigate.

A pile of Amazon boxes


We were inspired by this Pulp-it project, and we modified their open-source parts in Fusion 360 to add the Makerspace logo to the coaster. To do this, we imported an image of the logo and then extruded (raised) it about 8mm. To minimize waste, we tested our prototype models by printing them at 15% of actual size.

Fusion 360 software: Before adding our logo


Fusion 360 software: After adding our logo



Spinning Tales: Arduino Turntable Step-by-Step Tutorial (Part 2)

Completed Turntable with the control board

Welcome back to my deep dive into the creation of a low-cost DIY Arduino turntable designed for photogrammetry enthusiasts. In this continuation, I will share a detailed, step-by-step breakdown of the build process, highlighting the technical challenges and solutions, while providing comprehensive resources to empower you to replicate this project.

Components

The primary goal was to design a reliable and cost-effective turntable that can be easily assembled by hobbyists. The focus was on using readily available parts and open-source software to keep the project accessible. Below is a detailed component breakdown, including links, for each part needed for the project:

1. NEMA 17 Stepper Motor
Quantity: 2-3
Why? Chosen for its balance between cost and performance. NEMA 17 offers sufficient torque for precise rotations necessary in photogrammetry without being overly robust for lightweight platform applications. Compared to larger steppers like the NEMA 23, which offers more power but at a higher cost and size, the NEMA 17 is more suited for desktop projects where space and budget are limited.

2. A4988 Stepper Motor Driver
Quantity: 2-3
Why? The A4988 is a reliable and widely used motor driver that offers easy interfacing with Arduino, making it ideal for beginners and intermediate users alike. It supports micro-stepping which is essential for smooth and accurate rotation. Other drivers like the DRV8825 could also be used but typically cost more and require additional adjustments, making the A4988 a more straightforward choice for this project.

3. 608 Bearing 8x22x7
Quantity: 4-6
Why? These standard skateboard bearings are cost-effective and easily available. They are durable and provide smooth rotation with minimal friction, which is crucial for the accuracy of the turntable. Alternative options like specialized robotics bearings offer higher precision but at a significantly higher cost, making them overkill for this application.

4. 12V Adapter with Female Adapter
Quantity: 1
Why? This adapter provides a reliable and stable power source for the project. 12V is typically needed for the stepper motors, and using a dedicated adapter ensures consistent performance. Alternatives like USB power sources do not generally offer sufficient current for larger motors and can lead to performance issues.

5. Male – Male Jumper Wires
Quantity: 1 pack
Why? Essential for making connections between the Arduino, motor driver, and other components. Chosen for their flexibility and ease of use, they can be quickly reconfigured as needed without soldering, making prototyping faster and simpler. Compared to other connectors, these are very cost-effective and work well in a breadboard setup.

6. Breadboard
Quantity: 1
Why? A breadboard is ideal for this type of project because it allows for easy adjustments and experimentation without permanent changes. This medium-sized breadboard was selected for its sufficient size to fit all components while remaining compact, offering a balance between workspace and portability. I do have plans for using a PCB board in future iterations. More details on it later.

7. Arduino Uno R3
Quantity: 1
Why? The Arduino Uno R3 is the standard for many DIY electronics projects due to its robust community support, extensive libraries, and compatibility with a wide range of shields and accessories. It strikes an ideal balance between functionality, price, and user-friendliness, making it preferable over more powerful boards like the Arduino Mega when simplicity and cost are considered.

8. Push Buttons
Quantity: 3

9. 330 Ohm Resistors
Quantity: 4

The control board

STL Files

For each part, I’ve created STL files that you can download and print. The files are designed to be printed with common filament materials like PLA or ABS, which offer a good balance between strength and ease of printing. You can download the .stl files from: https://github.com/tashrique/DIY-Turntable-Makerspace-Resources.

  • Base V2: This is the foundation of the turntable. It holds the stepper motor and the bearings.
  • Rotating Platform V2: This part is mounted on top of the bearings and is directly driven by the stepper motor. It is where the object to be scanned is placed.
  • Bearing Holders: These components are used to hold the 608 bearings in place. Print 3 pieces of these.

3D Printing Instructions

  • Material: PLA, PETG, ABS, or ASA
  • Layer Height: 0.2 mm for a good balance of speed and detail.
  • Infill: 15% is sufficient for structural integrity but can be increased for parts under more stress, like the motor mount and gear set.
  • Supports: All parts should print well without supports.
  • Bed Adhesion: Use a raft or brim if you experience issues with bed adhesion during printing.

Assembly Tips

Once the parts are printed, follow these tips for assembly:

  • Before the final assembly, test fit all parts together. This helps identify any print errors or adjustments needed.
  • If some parts don’t fit perfectly, you may need to sand or trim them slightly.
  • Use appropriate screws and adhesive to secure the parts firmly. This ensures the turntable remains stable during operation.

Completed assembly of the turntable

Assembly Process

Assembly Process for the Non-Electronic Components

Tools and Materials Needed

  • Super Glue (optional, for additional stability)
  • Sand Paper (optional, to make edges smooth)

Step 1: Preparing the Base Plate

  • Start by preparing the base plate: clear it of any excess material left over from printing.

Step 2: Installing the Motor

  • Align the motor mount with the designated area on the base plate.
  • Slide the motor into the slot.
  • Ensure the motor shaft protrudes through the mount to align with the gear system.

Step 3: Setting Up Bearings

Objective: Install the bearings that will support the rotating platform.

  • Position the bearing holders on the base plate as per the design.
  • Insert the 608 bearings into the holders. If the fit is tight, you may gently tap them into place using a rubber mallet. You might also want to use superglue to secure the holders in place.
  • Ensure the bearings spin freely without obstruction.

Step 4: Installing the Rotating Platform and Connecting the motor

  • Carefully align the rotating platform with the top of the bearings.
  • Slide the motor shaft into the platform’s connector, applying moderate pressure until it is stable and level.
  • Check that it rotates smoothly without catching or excessive play.

Step 5: Final Adjustments and Testing

  • Manually rotate the platform to check for smooth motion and correct gear alignment.
  • Make any necessary adjustments to the tightness of screws or alignment of gears.
  • Optionally, apply a small amount of lubricant to the gears and bearings for smoother operation.

Schematic diagram of the electronic components and pin connections

Electronic Assembly Guide

Tools and Materials Needed

  • Wire Cutters
  • Wire Strippers
  • Soldering Iron (optional, for a more permanent setup)
  • Multimeter (for checking connections)

Step 1: Setting Up the Arduino
Objective: Prepare the Arduino board for connection.

  • Place the Arduino on your workbench or mount it on the base plate.
  • Ensure that it is accessible for connections to both power and other components like the LCD and stepper motor driver.

Step 2: Connecting the Stepper Motor Driver
Objective: Install the A4988 stepper motor driver (Tip: stepper driver documentation).

  • Connect the motor driver to the Arduino using male-to-female jumper wires. Here’s a basic pin connection guide:
  • Connect the DIR (Direction) pin on the driver to a chosen digital pin on the Arduino (e.g., D2).
  • Connect the STEP pin on the driver to another digital pin on the Arduino (e.g., D3).
  • Ensure the ENABLE pin is connected if your driver requires it; otherwise it can be left unconnected or tied to ground.
  • Connect the VDD on the A4988 to the Arduino’s 5V output, and GND to one of the Arduino’s ground pins.

Step 3: Wiring the Stepper Motor
Objective: Connect the NEMA 17 stepper motor to the A4988 driver (Tip: NEMA17 documentation).

  • Identify the wire pairs of the stepper motor using a multimeter or by referring to the motor’s datasheet.
  • Connect these wires to the respective A and B terminals on the motor driver. Ensure that the polarity matches the driver’s requirements.
  • Double-check the connections to prevent any potential damage due to incorrect wiring.

Step 4: Adding the LCD Display
Objective: Connect the 16×2 LCD to the Arduino to display status and control messages.

  • Use a breadboard or direct jumper wires to connect the LCD. Typical connections are:
  • RS (register select) to a digital pin (e.g., D4).
  • E (enable) to another digital pin (e.g., D5).
  • D4 to D7 data pins of the LCD to digital pins D6, D7, D8, D9 on the Arduino.
  • Connect the VSS pin of the LCD to the ground and VDD to 5V on the Arduino.
  • Connect a potentiometer to the VO (contrast adjust) pin for contrast control.

Step 5: Power Supply Connection
Objective: Ensure proper power supply connections.

  • Connect the 12V adapter to the VMOT and GND on the stepper motor driver to power the stepper motor.
  • Ensure the Arduino is powered either via USB or an external 9V adapter connected to the VIN pin.

Step 6: Testing and Debugging
Objective: Test the setup to ensure everything is working as expected.

  • Upload a simple test sketch to the Arduino to check motor movements and LCD functionality.
  • Adjust the potentiometer to get a clear display on the LCD.
  • Use the multimeter to troubleshoot any connectivity issues.

Step 7: Final Setup
Objective: Secure all electronic components and clean up the wiring.

  • Use zip ties or cable management clips to organize and secure wires.
  • Ensure all connections are stable and that there’s no risk of loose wires interfering with the moving parts.

Wiring Diagram

LCD Pin Mapping
Reset = 7;
Enable = 8;
D4 = 9;
D5 = 10;
D6 = 11;
D7 = 12;

Stepper Motor Pin Mapping
Step = 6
Direction = 5
(Type of driver: with 2 pins, STEP, DIR)

Programming the Turntable

#include <LiquidCrystal.h>
#include <AccelStepper.h>

// Software reset: calling this jumps to address 0 and restarts the sketch.
void (*resetFunc)(void) = 0;

/*
LCD Pin Map
Reset = 7;
Enable = 8;
D4 = 9;
D5 = 10;
D6 = 11;
D7 = 12;

Stepper PIN Map
Step = 6
Direction = 5
(Type of driver: with 2 pins, STEP, DIR)
*/

// Driver type 1 = two-pin driver board (STEP = 6, DIR = 5)
AccelStepper stepper(1, 6, 5);

const int rs = 7, en = 8, d4 = 9, d5 = 10, d6 = 11, d7 = 12;
LiquidCrystal lcd(rs, en, d4, d5, d6, d7);

int green = 2;      // green status LED
int red = 3;        // red status LED
int button = 4;     // confirm push button
int controls = A1;  // mode-select potentiometer (Reset / Pause / Start)
int speeds = A0;    // speed potentiometer

String currentStat = "Reset";
String prevStat = "Reset";
int stepsTaken = 0;
bool buttonPressed = false;
bool actionTaken = false;
int buttonClicked = 0;
int currentSpeed = 0;

void setup() {
  lcd.begin(16, 2);
  pinMode(green, OUTPUT);
  pinMode(red, OUTPUT);
  pinMode(button, INPUT);

  resetControls();
}

void loop() {
  runProgram();
}

void runProgram() {
  currentSpeed = readSpeed();
  currentStat = getStatus();
  buttonClicked = buttonClick();

  digitalWrite(red, HIGH);

  // Top row: current mode and selected per-step delay
  lcd.setCursor(0, 0);
  lcd.print(": " + currentStat);

  lcd.setCursor(8, 0);
  lcd.print("-> " + String(currentSpeed) + "ms");

  if (buttonClicked == 1) {
    lcd.clear();

    // Reset
    if (currentStat == "Reset") {
      lcd.setCursor(0, 0);
      lcd.print("RESETTING...");
      stepsTaken = 0;
      prevStat = currentStat;
      digitalWrite(green, LOW);
      digitalWrite(red, HIGH);
      resetFunc();
    }

    // Resume from a pause, continuing at the saved step count
    else if (currentStat == "Start" && prevStat == "Pause") {
      lcd.setCursor(0, 1);
      lcd.print("RESUMED @" + String(currentSpeed));
      prevStat = currentStat;
      stepsTaken = commandStart(currentSpeed, stepsTaken);
    }

    // Start a fresh rotation from step 0
    else if (currentStat == "Start") {
      lcd.setCursor(0, 1);
      lcd.print("STARTED @" + String(currentSpeed));
      prevStat = currentStat;
      stepsTaken = commandStart(currentSpeed, 0);
    }

    else if (currentStat == "Pause" && prevStat == "Pause") {
      lcd.setCursor(0, 1);
      lcd.print("Already Paused");
    }

    // Undefined
    else {
      lcd.setCursor(0, 1);
      lcd.print("Invalid Command");
    }
  }
}

/*--------------------------------------*/

// Sweep the platform from `initial` to step 200, waiting `currentSpeed` ms
// between steps. Returns the last step reached so a pause can resume later.
int commandStart(int currentSpeed, int initial) {

  lcd.clear();
  int steps = 0;

  digitalWrite(red, LOW);
  digitalWrite(green, HIGH);

  for (int i = initial; i <= 200; i++) {
    stepper.moveTo(i);
    stepper.runToPosition();
    lcd.setCursor(0, 1);
    lcd.print(i);

    lcd.setCursor(4, 1);
    lcd.print("/ 200 steps");
    steps = i;
    delay(currentSpeed);

    // Check if any other button is pressed while started
    String check = getStatus();
    lcd.setCursor(0, 0);
    lcd.print(check);

    int clicked = buttonClick();
    String clickedIndicator = clicked ? "*" : "";
    lcd.setCursor(6, 0);
    lcd.print(clickedIndicator);

    if (clicked) {
      if (check == "Reset") {
        lcd.clear();
        lcd.setCursor(0, 0);
        lcd.print("RESETTING...");
        delay(200);
        stepsTaken = 0;
        prevStat = "Reset";

        digitalWrite(green, LOW);
        digitalWrite(red, HIGH);

        resetFunc();
      }

      else if (check == "Pause") {
        lcd.clear();
        lcd.setCursor(0, 0);
        lcd.print("Paused");
        delay(200);
        prevStat = "Pause";

        digitalWrite(green, HIGH);
        digitalWrite(red, HIGH);
        return steps;
      }
    }
  }

  return steps;
}

/*--------------------------------------*/

// Raw (undebounced) read of the confirm button.
int buttonClick() {
  int reading = digitalRead(button);
  return reading;
}

// Startup light show: flash both LEDs twice, then clear the display.
void resetControls() {
  lcd.clear();
  lcd.setCursor(0, 0);
  lcd.print("Turntable - Tash!");
  digitalWrite(red, HIGH);
  digitalWrite(green, HIGH);
  delay(500);
  digitalWrite(red, LOW);
  digitalWrite(green, LOW);
  delay(500);
  digitalWrite(red, HIGH);
  digitalWrite(green, HIGH);
  delay(500);
  digitalWrite(red, LOW);
  digitalWrite(green, LOW);
  lcd.clear();
}

// Map the mode potentiometer onto the three commands.
String getStatus() {
  int controlStatus = analogRead(controls);
  int controlRange = map(controlStatus, 0, 1023, 1, 4);
  String stat = "";

  if (controlRange == 1)
    stat = "Reset";

  else if (controlRange == 2)
    stat = "Pause";

  else if (controlRange == 3 || controlRange == 4)
    stat = "Start";

  else
    stat = "-----";

  delay(100);

  return stat;
}

// Map the speed potentiometer onto a 250-5000 ms delay per step.
int readSpeed() {
  int sensorVal = analogRead(speeds);
  int stepSpeed = map(sensorVal, 0, 1023, 250, 5000);
  return stepSpeed;
}

The code for the turntable is structured to handle its various functions: controlling the motor, updating the LCD display, and reading inputs from the push buttons and potentiometers. Access the full commented code in my GitHub repository: https://github.com/tashrique/DIY-Turntable-Makerspace-Resources

Troubleshooting Common Issues

Motor Noise or Vibration

  • Check alignment of gears and ensure the stepper driver is correctly calibrated.

LCD Display Issues

  • Verify wiring connections and contrast settings; adjust the potentiometer if used or calibrate the voltage divider correctly for clear visibility.

Code Bugs

  • Use serial debugging to monitor outputs and verify that the logic in your sketches matches the intended functions.

Future Enhancements

Integration of IR Sensors

  • Automate the camera shutter operation in sync with the turntable’s rotation to facilitate overnight operations.

PCB Board

  • Integrate the entire circuit onto a PCB

Conclusion

If you have read this far, thank you and good luck! This guide aims to equip you with all the knowledge needed to create and customize your own turntable, fostering further exploration into the fascinating world of DIY electronics. Feel free to share your project progress and reach out with questions or suggestions. Your feedback helps improve and inspire future projects!


Sustainable 3D Printing at Williams College (Part 2)

Polyformer Updates:

Polyformer 3D printed parts and electronics ready to be assembled.


My name is Camily Hidalgo Goncalves, and I am a sophomore at Williams College majoring in Chemistry with a Neuroscience concentration. As a Makerspace student worker, I have recruited Milton Vento ’26, Tashrique Ahmed ’26 (both Computer Science students at Williams College and fellow Makerspace student workers), and Oscar Caino ’27, a student at Swarthmore College who is a prospective Engineering major, to assist me in assembling the Polyformer parts and electronics. We have completed several milestones, and made significant progress on the Polyformer project at Williams College. This innovative project aims to upcycle waste plastic bottles into locally-sourced 3D printer filament.

Assembly and Integration

The assembled Polyformer


Milton, Oscar and I worked together to assemble the 78 individual 3D-printed parts required for the Polyformer. This intricate process demanded precision and teamwork. Following the assembly of the physical components, I assisted Tashrique with integrating the electronics. This included the installation of a circuit board, LCD screen, volcano heater block, stepper motor, and various sensors and wiring. These components are essential for the Polyformer to function effectively, converting plastic bottles into usable 3D printer filament. 

Collection and Processing of Plastic Bottles

Plastic bottle collection poster.


In preparation for testing, we collected approximately 75 plastic bottles. These bottles were contributed by the Williams College community, demonstrating a collective effort to reduce plastic waste. Elena Sore ‘27, a prospective Computer Science major and Makerspace student worker, and I handled the initial processing step: cleaning the bottles and cutting them into long, consistent ribbons. These ribbons will then be fed into the Polyformer, where they will be melted and extruded into filament.

Testing and Quality Assurance

Next fall semester we will begin rigorous testing to ensure that the Polyformer operates smoothly and produces high-quality filament that meets the required standards for 3D printing. Several tests will be conducted, including:

  1. Durability Testing: Assessing the strength and flexibility of the produced filament.
  2. Consistency Testing: Ensuring the filament has a uniform diameter, which is crucial for reliable 3D printing.
  3. Compatibility Testing: Verifying that the filament performs well with various 3D printers and printing conditions, while accommodating different material thicknesses from various brands of PET bottles.

Project Goals and Benefits

The Polyformer project aligns with Williams College’s sustainability goals and offers numerous benefits:

  • Waste Reduction: By upcycling plastic bottles, we reduce the amount of plastic waste that ends up in landfills or oceans.
  • Sustainability Education: The project serves as a hands-on educational tool, teaching students about the importance of repurposing and innovative ways to repurpose waste materials.
  • Local Impact: The filament produced will be used to create practical items such as plant pots and compost bins for the Zilkha Center for Environmental Initiatives, supporting local sustainability efforts.

Next Steps

We hope to create a sustainable cycle that converts plastic waste into useful products while minimizing the environmental impact of plastic disposal. The project provides a practical solution to plastic waste and also serves as an educational tool, raising awareness about sustainability and encouraging innovative thinking in environmental conservation.

As we move forward, our next steps will be to refine the process and increase the efficiency of the Polyformer:

  1. Rigorous Testing: Thoroughly test the Polyformer to ensure it produces reliable and high-quality filament that meets 3D printing standards.
  2. Scaling Up: Increase the number of collected bottles and the quantity of filament produced.
  3. Educational Workshops: Host campus workshops to educate the broader community about the Polyformer and the importance of sustainable practices. We might seek to collaborate with the Williamstown Milne Library to host a workshop for local community members.
  4. Research and Development: Continue to improve the design and functionality of the Polyformer based on feedback and test results.

Acknowledgements

Assembling the Polyformer: Oscar Caino ‘27, a Swarthmore College student (left), and Camily Hidalgo Goncalves ‘26, a Williams College student (right).

This project would not have been possible without the ongoing support and collaboration received. We are immensely grateful to our collaborators: David Keiser-Clark (Makerspace Program Manager), Milton Vento ‘26, Tashrique Ahmed ‘26 and Elena Sore ‘27 (Makerspace Student Workers), Yvette Belleau (Lead Custodian, Facilities), Christine Seibert (Sustainability Coordinator, Zilkha Center), Mike Evans (Deputy Director, Zilkha Center for Environmental Initiatives), and Oscar Caino ‘27 (Swarthmore College Student). Their expertise, guidance, and contributions have been invaluable to the progress of the Polyformer project.

Stay tuned for more updates as we continue to develop and test the Polyformer. Together, we can make a significant impact in reducing plastic waste and promoting sustainable practices at Williams College.

Reefs Reimagined: 3D Printing the Effects of Tsunamis on Coral

Lauren Mukavitz ‘27: In the Makerspace taking the supports off my finished models

When most people think about coral reef degradation, they often think about bleaching and the effects of climate change. However, coral faces another danger that is hardly talked about—tsunamis. Coral reefs have a unique structure that increases the friction a tsunami encounters on its way to the shore, slowing down the wave and mitigating damage. However, the intense forces during a tsunami can be extremely damaging and can destroy entire reefs. To better understand this impact, I embarked on a project for my class Geologic Hazards with Mike Hudak, Assistant Professor of Geosciences, to model coral before and after a tsunami.

Replicating Tsunami Damaged Coral

First, I created an undamaged model that represented a small colony of coral polyps before a tsunami event. I used Ultimaker Cura to design a 3D model of the coral. Next, I wanted to simulate the damage caused by a tsunami. After struggling to find existing methods for modeling tsunami forces on coral, I teamed up with David Keiser-Clark, Makerspace Program Manager, Elena Sore, Makerspace Student Worker, and Jason Mativi, Science Shop Instrumentation Engineer, to use SolidWorks, a 3D CAD program. We applied a nonlinear analysis with 0.3 bar (or 3E4 N/m^2) of pressure, the estimated force an average piece of coral experiences during a tsunami, to the undamaged model and let SolidWorks create a “deformed” model for us. It took the software approximately four hours to apply these forces to the 3D model.
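As a quick sanity check on that pressure figure, the bar-to-SI conversion is definitional (1 bar = 1×10⁵ N/m²), so it can be verified in one line:

```shell
# 1 bar is defined as 1e5 N/m^2, so 0.3 bar converts as:
awk 'BEGIN { printf "0.3 bar = %.0f N/m^2\n", 0.3 * 1e5 }'
```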

Left: Original coral 3D model; Right: Same model but deformed using SolidWorks to simulate tsunami forces

The successful PLA print — StoneFil was not a fan of my design

Then I had both models printed at the Makerspace. Initially, we tried using StoneFil PLA, a filament that would approximately mimic coral’s composition: half PLA (a polyester typically derived from fermented plant starch, such as corn, cassava, sugarcane, or sugar beet pulp) and half ceramic powder. However, the model was too intricate for the material, resulting in a messy and unusable print. We ended up using standard PLA for the final models, which, while less accurate in texture, allowed us to proceed with the physical representation. To simulate sediment damage, I took the “deformed” model to the science shop and used a sandblaster. Unfortunately, the PLA was too strong, and the glass beads in the sandblaster didn’t erode it as expected. So we resorted to breaking the model by hand to represent the kind of physical damage coral might endure during a tsunami.

My models are only approximations of the damage coral sustains during tsunamis. The exact forces on coral polyps during these events are unique and complex, making accurate modeling challenging.

Next Steps

The first step toward a more accurate model would be refining the methods used to determine the necessary forces and coefficients. Then, we could use a 3D CAD program like SolidWorks for a more precise analysis. Additionally, applying post-processing techniques to the 3D-printed models, such as adhesives and texturing materials, could make the PLA models look and feel more like real coral, enhancing their realism.

Creating more accurate models provides a deeper understanding of the interactions between coral reefs and tsunamis, helping us plan better for these events. This knowledge can guide conservation efforts, inform disaster preparedness strategies, and contribute to the broader field of marine biology. As better models are developed, we move closer to mitigating the devastating impacts of natural disasters on vital ecosystems like coral reefs.

Postscript (August 16, 2024)

See related CNN article: Why this scientist is leaning on surfers, skaters and artists to protect the ocean – “Cliff Kapono is a Native Hawaiian pro surfer and chemist in a race to save the ocean he loves. He co-founded The Mega Lab, a science research group that welcomes anyone (no degree required!) who can help them develop technology and raise awareness about dying coral reefs.”

Postscript (September 9, 2024)

See related CNN article: See the technique that could help save the Great Barrier Reef – “Researchers in Australia are testing a technique called ‘coral seeding’ [that utilizes 3D printers] to help the Great Barrier Reef recover from the effects of climate change.”

The Lincoln Logs: Printing for the WCMA’s Emancipation Exhibition

Introduction: 

WCMA’s “Emancipation: The Unfinished Project of Liberation” exhibit

My most recent Makerspace academic project was assisting Beth Fischer, Assistant Curator of Digital Learning and Research for the Williams College Museum of Art. My task was to 3D print replicas of two sculptures of President Lincoln—Sarah Fisher Ames’ bust of Lincoln and the iconic Abraham Lincoln life mask by Clark Mills—as part of the WCMA’s “Emancipation: The Unfinished Project of Liberation” exhibit. These two models complement the work of Hugh Hayden, also featured in Emancipation, who incorporates PLA prints into his artistic process. The exhibit emphasizes 3D printing as a relatively accessible medium for creativity and showcases different ways it can support other styles of art, particularly mold-making.

Setup 

The two photogrammetry-based 3D models were gorgeous. They defined every ridge, bump, and strand of hair on Lincoln’s head while carrying the texture of the clay, but it was this beauty that posed a challenge. The multidimensional texture of clay is hard to depict using horizontal layers of filament, which is how 3D printers build. Although not a full solution, one remedy was a hybrid filament – part ceramic and part PLA. While this filament can’t recreate the vertical complexity of a sculpted model’s texture, it provides a smoother, heavier finish that better resembles the original material. 

We had some leftover StoneFil filament from a previous project, but we knew we would need more to complete both prints. The question was how much more. We did not know how much filament remained on the spools and there was no specific size requested – simply that the two models remain proportional and be as large as possible. 

Naturally, as a math major, I took this as a challenge: maximize the size we could print with only one additional spool of filament. First, I printed two smaller models, noted their xyz scaling, and measured the distance from nose to chin on each. I used those measurements to fix the proportion between the height of one model and the length of the other. Then, since filament use grows with a model’s volume, the filament needed scales with the cube of the linear dimensions; noting the estimated filament required at a few different sizes confirmed this scaling factor. In theory, I could now approximate the maximum print size given the length of the filament we had left and the spool arriving soon. There was only one problem – we didn’t know how much filament we had. We could weigh the spools, but any statement about the spool-to-filament proportion would’ve been guesswork. 
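The core of that estimate fits in one line. A sketch, assuming filament use grows with the cube of the linear scale (the gram figures below are illustrative, not the project’s actual numbers):

```shell
# If a test print at slicer scale 1.0 used 42 g of filament, the largest
# uniform scale printable within a 180 g budget is the cube root of the
# weight ratio, since filament use grows with volume.
awk 'BEGIN { test_g = 42; budget_g = 180; printf "max scale: %.2f\n", (budget_g / test_g) ^ (1/3) }'
```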

That was when another Makerspace student worker, Elena Sore, had the idea to create a reference guide of empty filament spool weights. We use a variety of filament brands, and each has a different-sized spool. Now, when we finish a spool, we weigh it and enter it into a spreadsheet, allowing us to measure the amount of filament remaining on any given spool by subtracting the empty spool’s weight from the overall weight. 
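The arithmetic behind that spreadsheet is simple subtraction; a minimal sketch (the weights below are made-up examples, not entries from our actual log):

```shell
total_g=1187   # measured weight of spool plus remaining filament, in grams
empty_g=212    # empty-spool weight for this brand, from the reference sheet
echo "filament remaining: $((total_g - empty_g)) g"
```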

Printing and Troubleshooting

The final bust with its supports still attached

The time came to print the models. I had decided on heights of 140mm and 93.15mm, which would give us just enough filament to print both models, with enough to spare to still print one more in case of failure. I sliced and started the print of the bust, and 20 hours later it came out well. There were a few small holes that indicated mild under-extrusion, but they were not too distracting, and since the WCMA was interested in showcasing the uniqueness of 3D prints, I was perfectly content with the model. 

The second print was not as fortunate. Externally, it looked fine, except the under-extrusion was more visible than on the first model. Before removing the model from the plate, I started googling remedies for under-extrusion because I was concerned that I didn’t have enough filament to endure another failure. I recalibrated the printer, increased the nozzle temperature, slightly decreased the printing speed, and ran another mini model with ordinary PLA. It came out perfectly – and that worried me, because I was nervous the problem was with the ceramic filament, which was a requirement for the project. Eventually, I stumbled onto the cause by turning the StoneFil model upside down to examine the supports, and to my shock, I found that they had completely “spaghettified”: the supports had failed entirely and were just a mess of tangled filament. I was impressed that the print had managed to build at all. 

The under-extrusion was far more noticeable on the first print of the mask than the bust.

Exhibition: “Feel free to pick up and touch these reduced-scale 3D prints of Abraham Lincoln!”

I spent some time in different slicing programs, trying to optimize the supports. It took me (admittedly longer than it should have) to realize that with supports as dense as this model required, this was a rare case where it would be more filament-efficient and less failure-prone to fill the space underneath the mask with infill instead of supports. This was the solution we went with, and the mask printed perfectly.

While weighing the options for the final print, David Keiser-Clark, Makerspace Program Manager, and I brainstormed ways of filling in the holes caused by under-extrusion. Our favorite idea, and the only experiment we ran, was using a heat gun to melt a tiny bit of StoneFil filament into a hole and then sand down the excess. It was good in theory, and fun to try, but not entirely effective because the repair looked like a visible patch. This is because 3D printing filament solidifies incredibly fast as it cools, so we would have needed to either pour a liquid into the hole or do a tremendous amount of sanding afterward.

Conclusion

Coincidentally, as the final prints started, I again fell very ill and had to return home for the week, so I did not get to hand off the pieces. However, I did get the chance to visit the Emancipation exhibit and see the final results. The space itself was a moving experience, and I would strongly encourage anybody to visit or read about the exhibition and its incorporation of 3D printing. This was a fun project to complete during Winter Study, and I got to answer a lot of looming questions about 3D printing along the way. I learned a lot about the balance of layer height, print speed, and temperature; I’m excited to see what else we can do with our filament data log; and melting PLA with the heat gun was so much fun that I may try to find a way to make it practical. Although, I must admit, my favorite part of this project is the little Lincoln that found himself a home in my dorm.

An early, miniature prototype that now adorns my desk as a reminder of my work on this WCMA project!

Lost but Found in the Photogrammetry World

The Quandary:

Have you ever broken or lost a small part of an important object you value? Perhaps the strap of that beautiful watch you got from your grandma or the battery cover for the back of your remote control? You looked for it everywhere, but the part was too “insignificant” to be sold on its own. Or it just wasn’t the sort of thing that anyone would expect to need a replacement.

The original black “obsolete plastic object” (on left) keeping files safely stored, alongside the newly cloned red part (on right)

Last semester at Williams College, Chris Koné, Associate Professor of German and Director of the Oakley Center for the Humanities & Social Sciences, had a similar experience. He lost an integral part of his desk that allows him to keep his files neatly stored and organized (shown in the picture). Desperate to have a place for the files and papers scattered miserably on the floor, Prof. Koné looked in a brick-and-mortar NYC office-parts store, as well as on Amazon, eBay, and other e-commerce websites, but alas, the object was nowhere to be found. It had become obsolete!

The “obsolete plastic object”

Determined to leave no stone unturned in finding a replacement for the obsolete plastic object, Prof. Koné did what any sensible person with access to the Makerspace would do – he asked for a 3D-printed model of the object! And it is here that he met me, an intern working at the Makerspace over the summer. In the process of helping him, I learned about multiple methods of photogrammetry and created a significantly more efficient and streamlined workflow for the Makerspace. 

Some Background

Since I was a new student worker with zero knowledge of photogrammetry and 3D printing, David Keiser-Clark, the Makerspace Program Manager, thought this project would be just the right amount of challenge for me. Photogrammetry is the process of creating a three-dimensional digital model of an object by taking dozens or hundreds of photos of the object from different angles and processing them with software to create a digital spatial representation of the object. The project would be a good introduction to the 3D digital world while allowing me to get acquainted with the Makerspace.

If you have tried photogrammetry, you know that some of the most difficult objects to work with are those that are dark or shiny. This object was dark and shiny! When an object is dark, it becomes difficult for the software to distinguish one feature on the object from another, resulting in an inaccurate digital representation. Likewise, light is reflected when an object is shiny, resulting in images that lack details in the shiny areas. Thus, you can imagine how challenging it is when your object is both shiny and dark!

Step 1

The first step was to figure out how to reduce the darkness and shininess of the object. To kill both birds with one stone, I covered the object with white baby powder, a cheaper alternative to expensive photogrammetry sprays used in industry. The powder’s white color would help eliminate the object’s darkness and offer it some helpful texture, while its anti-reflective nature would reduce shininess. After several attempts to completely cover the object, this method proved ineffective as the powder would not stick to the object’s smooth surface. A little out-of-the-box thinking led me to cover the object with matte blue paper tape, which proved very effective as the tape’s rough texture allowed minimum light reflection. 

The obsolete plastic object coated with blue tape

A Bit of Photography 

Milton taking pictures for photogrammetry

Now that the two biggest giants had been slain, it was time to move on to the next step: taking pictures of the object. Taking shots for photogrammetry is very similar to doing stop-motion animation. You take a picture of the object, rotate it by a small angle (5–15 degrees) by hand or with a turntable (a rotating disc), and take another picture. You repeat this process until the object has rotated completely, then change the camera angle (e.g., by taking shots from above the object) and redo the whole process. This can be quite tedious, especially by hand, but luckily for me, the Makerspace had recently bought an automated turntable, so I didn’t have to rotate the object manually. I also got to be the first to create a documentation guide so other Makerspace student workers can more easily use the turntable in the future!
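To get a feel for the scale of the task, here is a rough shot-count estimate (the 10-degree step and three camera heights are illustrative choices within the 5–15 degree range above, not the exact settings I used):

```shell
# 360 degrees at a 10-degree step is 36 shots per revolution; repeating at
# 3 camera heights gives the approximate total photo count per object.
awk 'BEGIN { step = 10; heights = 3; printf "%d photos\n", (360 / step) * heights }'
```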

Alignment Process

Once the photos were ready, the next step was to analyze them using photogrammetry software. I turned to Agisoft Metashape, a powerful program that takes pictures of an object from different angles and analyzes them to create a 3D depiction of the object. The software first finds common points between the various images, called anchor points, and calculates their relative distances, allowing it to place them in 3D space. This process is called alignment.

Unfortunately, despite my efforts to aid the software by covering the object with matte blue tape to reduce its shininess and darkness, the obsolete plastic object did not align properly in Metashape. While I could not pinpoint the exact reason, I suspect it was due to its hollow shape, which made it challenging for the software to capture points on the inner surfaces, especially the corners. It was quite disappointing to get these results, especially after having had to wade through Metashape’s jungle of commands, but that was certainly not the end of it all. I decided to try a different approach – raise an older desktop 3D scanner from the grave!

Misalignment in Metashape

The Hewlett Packard (HP) 3D Structured Light Scanner

The DAVID 3D Scanner (now sold as the HP 3D Structured Light Scanner) works by projecting patterns of light onto a subject and using a camera to capture how those patterns deform across its surface; triangulating between the projector and the camera determines the distance of each point. These points, represented as XYZ coordinates, are collectively used to digitally reconstruct the object in 3D space. I intended to use the structured light scanner as an alternative to Metashape because it allows more control over the alignment process. For example, you can select two specific scans you want to align and tell the software how they should be aligned. In addition, the scanner features a projector that sheds structured light on the object you’re scanning, as well as a calibrated background panel, allowing for greater detail to be picked up. 

HP 3D Structured Light Scanner

A Bit of Scanner Surgery

Using the HP 3D Structured Light Scanner

The Makerspace’s HP scanner unfortunately hadn’t been functional in over three years. The camera was not working, and the scanner’s software could not make exports due to licensing issues. I updated the device’s software and installed new camera drivers, and in no time, the scanner was fully functional again. I then scanned the obsolete plastic object with the structured scanner. Unfortunately, the results were unsatisfactory. It resolved the prior alignment issue with Metashape, but the digital model had thin walls and holes on some of its surfaces, making it impossible to print. 

Thin walls and holes in the structured light scanner model

Building from the Ground Up with Fusion 360

Results of different lighting settings in the HP 3D Structured Light Scanner

After trying out different strategies with the HP 3D Structured Light Scanner, such as different light settings, and still not getting good results, David suggested a different method – building the model from scratch! Excited to try out new software (and get a break from the structured scanner!), I began exploring Fusion 360 tutorials and documentation. Autodesk Fusion 360 is a Computer-Aided Design (CAD) program with applications across various sectors, including manufacturing, engineering, and electronics. It allows one to create a simple sketch of a model and build it into a solid model with precise dimensions. You can even add simulations of real-world features such as materials and lighting. 

Of course, this new, complicated piece of software came with its challenges. For example, I had to know the dimensions of the fillets (the rounded arcs) inside and outside my object. A little creativity combined with a pair of vernier calipers and a piece of paper did the job. Another challenge was understanding the timeline, one of Fusion 360’s most important features, which records your modeling steps and allows you to go back to any point. Researching online and getting help from a friend (shoutout to Oscar!) with more experience in Fusion 360 proved helpful in better understanding the software. 

Successful Fusion 360 model of the obsolete plastic object

Fusion 360 timeline for modeling the obsolete plastic object

The Obsolete Plastic Object Was No Longer Obsolete

Finally, after several days of learning Fusion 360 and incrementally building a model, the obsolete plastic object was no longer obsolete. I produced an accurate model of the object and printed several copies, which Professor Koné was more than happy to receive. His files had regained their home, and time spent scouring eBay and Amazon for a nameless object had come to an end!

The red part (right) is the new clone of the original black “obsolete plastic object” (left). Files are once again safely organized.

Conclusion

My experience working on photogrammetry and 3D modeling at the Makerspace was certainly full of twists and turns but definitely worth it. I learned how to use more than three very complicated software applications, significantly improved the Makerspace photogrammetry procedure (reduced a 3-month process to 1-2 days), and approached new challenges with an open mind.

Prof. Koné and me holding the original (covered in blue tape) and a newly printed black 3D “obsolete” plastic object

Next Steps

I look forward to exploring other methods of photogrammetry, particularly ones that require less equipment, such as those that use only a smartphone. RealityScan is one promising alternative that can create lower-resolution scans and models in less than 15 minutes. With new technologies coming out every day, there are many avenues to explore, and I’m excited to discover better methods. 

Screenshot: Experimenting with the RealityScan smartphone app

Truly-Local Internet: The PiBrary Project

Figure 1: A Raspberry Pi 4 Model B

If a local organization has important information for its neighbors, is there a way it can broadcast directly to them without bouncing the data to Toronto and back? I grew up here in the Berkshires, and have recently joined the Office of Information Technology (OIT) at Williams. Thinking about Williams College’s commitment to community service and support, my project goal was to demo a low-cost, low-maintenance device which a local organization could use to easily broadcast information and resources over WiFi directly to nearby cell phone users through familiar, standard methods — (1) connecting to a WiFi network, and (2) navigating to a website address in a browser — without needing national or global infrastructure, or specialized equipment or technical skills on either side of the connection. Such an “internet-in-a-box” model could have useful applications in emergency scenarios, but also could provide curated information or resources digitally to multiple people in other specific, time-limited places and moments — for example, at a festival, workshop, teach-in, or other community event.

Figure 2: Wilmington VT, a mere 40-minute drive from Williams.

Let’s give this idea some real context. Imagine a small town in nearby southern Vermont – say, Wilmington. It’s late August, and a storm rips through, dropping 8 inches of rain in 24 hours, washing out roads and bridges, and knocking cell towers and internet infrastructure offline, leaving you without any connectivity for days. Local fire, rescue, and police services, town government, even your electric company, typically use websites, social media, and text messages to communicate critical information — but now, those methods don’t work. Where can you go for information regarding emergency food, shelter, medical care? Is the water safe to drink? When will power be restored? Where are the downed power lines and flooded roads? You’re both literally and figuratively in the dark.

No need to imagine: this actually happened in 2011, with Tropical Storm Irene. Superstorm Sandy in 2012 presented a similar case. And just this April, a single fiber optic cable damaged by a late-season snowstorm shut down Berkshire businesses for a day.

Truly local connections literally do not exist on the modern Internet. Are you on campus and want to view the williams.edu website? That data lives on a server in Toronto, and travels through 6 or 7 intermediary servers (including places like New Jersey and Ohio) before it lands in your cell phone’s browser (also producing 0.5g CO2 each visit). Under normal conditions, this globalized infrastructure is reliable, and has important benefits. But it’s useful to think about the edge cases. Climate change is bringing more unpredictable severe weather events. Rural areas like ours are often underserved by internet service providers (ISPs), which often have little financial incentive to invest in maintaining or expanding infrastructure.

This post offers a guide to creating your own DIY hyper-local webserver. If you can write a webpage (in plain HTML) and are open to my guidance in using a command line: follow along!

Required Equipment and Steps

Figure 3: Required hardware: Raspberry Pi 4 Model B, Power Supply, PiSwitch, and 32GB MicroSD card with Raspberry Pi OS installed.

I decided to build using a Raspberry Pi 4 Model B single-board computer. The Pi is about the size and weight of a deck of cards, and runs a version of Linux, an open-source operating system (OS).

There were two tweaks I determined were necessary to make the Pi ready to play the role I imagined. First: I needed to enable the Pi to act as a webserver, rather than a desktop. Second: I needed to adjust the Pi’s built-in WiFi connection to broadcast, rather than receive, a WiFi signal.

Tweak 1: Webserver Setup

Globally, 30% of all known websites use Apache, an open-source webserver software launched in 1995. I installed Apache on the Pi through the command line, using the command:

sudo apt install apache2

Now, any content I wanted to broadcast to other users I could simply place into the preexisting folder at /var/www/html/. I wrote a home page, an “about” page, and created 4 subfolders loaded with some open-licensed content (PDFs, audio and video files). You can check out my content (and adapt it if you like!) at github.com/gpetruzella/pibrary.
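For example, a first page can be dropped into place straight from the command line. This is a minimal sketch: `/var/www/html/index.html` is Apache's default homepage location on Raspberry Pi OS and Debian, and the page text here is just a placeholder.

```shell
# Write a bare-bones homepage into Apache's default web root.
echo '<h1>Welcome to the PiBrary</h1>' | sudo tee /var/www/html/index.html
# Then confirm Apache serves it from the Pi itself (assumes curl is installed):
curl http://localhost/
```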

Tweak 2: Adjusting WiFi to Broadcast

I then used the following on the command line to tell the Pi to broadcast as a WiFi hotspot:

sudo nmcli device wifi hotspot ssid <my-chosen-hotspot-name> ifname wlan0

(The Pi’s built-in WiFi device is named “wlan0”; I chose to name my hotspot “pibrary”.)

Now, the Pi was broadcasting a WiFi hotspot, which other devices would be able to see and connect to. But… I wanted to make sure this happened automatically every time the Pi was switched on. To accomplish that, I needed to find the new hotspot’s UUID, then use that in one final configuration step. I found the hotspot’s UUID by running:

nmcli connection

This displayed a table with multiple rows: I found the “pibrary” row and copied its UUID. Then, I ran:

sudo nmcli connection modify <pibrary’s UUID> connection.autoconnect yes connection.autoconnect-priority 100

With this modification completed, simply switching on the Pi will automatically start broadcasting a WiFi signal (as a “hotspot” or source), with no extra steps.
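If you want to double-check that the setting took effect, NetworkManager can list each profile's autoconnect state (a quick sanity check using nmcli's field selection; the "pibrary" row should show autoconnect "yes" with priority 100):

```shell
# List connection profiles with their autoconnect settings.
nmcli -f NAME,AUTOCONNECT,AUTOCONNECT-PRIORITY connection show
```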

Connecting from a Nearby Mobile Phone

Figure 4: Viewing the homepage at pibrary.local.

Now, the Pi was both “serving” webpages, and broadcasting a WiFi hotspot. Even without Internet — such as during power outages — any nearby user could find and connect to the WiFi hotspot on their phone… but what “web address” would they type in the browser to reach the content? The final piece of the puzzle requires knowing the Pi’s “hostname”. When I first set up my Pi, I gave it the hostname pibrary (just like the hotspot). The domain name

.local

is a special-use domain name reserved for local network connections. So, once a cell phone has connected to the “pibrary” WiFi hotspot, that user can type

pibrary.local

into the browser to reach the homepage I had set up in Tweak 1. Finding your own Pi’s hostname is as easy as entering the following on the command line:

hostname
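And if you’d like your Pi to answer at a different name, you can change the hostname too. A sketch, assuming a systemd-based OS such as Raspberry Pi OS (where `hostnamectl` is standard):

```shell
# Show the current hostname (this is the <name> in <name>.local)
hostname

# Optionally pick a new one ("pibrary" here, as in this walkthrough);
# reboot afterwards so mDNS re-announces the new name on the network
sudo hostnamectl set-hostname pibrary
sudo reboot
```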

Experiencing the Local Website

Below are a few screenshot examples of navigating the PiBrary resources from an Android phone.

Figure 5: You can stream an open-licensed video.

Figure 6: You can access directories of open-licensed learning resources.

Figure 7: You can view PDFs.

Challenges and Future Expansions

One limitation of this implementation is the range of consumer-grade WiFi: the maximum signal distance is roughly 90 meters under ideal conditions. The HaLow (802.11ah) standard offers up to 1 km of range, but today’s consumer cell phones aren’t built to use it. One solution could use HaLow to send data from one Pi to another – say, one in town hall and another at a fire station (if each has an inexpensive HaLow module installed) – with each Pi serving its own nearby neighborhood over standard WiFi. Alternatively, even off-the-shelf home mesh or WiFi “extender” hardware could improve the reach of this model without significant cost.

A second challenge: maintaining and editing content. My ideal use-case was for non-technical community members (e.g. in public safety or town government) to easily push information, announcements, etc. However, this demo succeeded because I knew how to edit and manage webpage content directly (i.e. by writing HTML). For a non-technical community member, using the bare Apache webserver this way could be a significant barrier to easy deployment or quick posting, especially in the environment of a public emergency. To address this, I would like to explore whether the YunoHost open-source server management application is compatible with the PiBrary project. YunoHost offers a very familiar and robust web editing interface, plus other possible additional services, such as email hosting.

In terms of sustainability, a lightweight Pi-hosted local site radically reduces the total carbon impact of each site visit, even accounting for the fact that Williams’ website is “hosted green”. A fascinating expansion of this project would be to adopt sustainable web design principles and standards as a routine part of the college’s digital presence.

Finally, privacy. Unlike ordinary internet browsing, which has many elements protecting and encrypting the flow of data, this demo creates a simple, direct, unencrypted WiFi network. (You may have noticed the “insecure alert” icon next to the address in some of the screenshots above.) In the absence of any technical trust guarantees, this setup is suitable only for very specific cases where the connection between server and client is based on human trust – like in a local community!

Thanks to David Keiser-Clark, Makerspace Program Manager, and to my colleagues on the Academic Technology Services team, for their support in developing this prototype.

Simulating Spaces with AR

Fig.1 This is me standing in front of Chapin Hall, using my tablet to view my AR model (see below) superimposed as a “permanent object” onto the Williams campus.

At age nine, I had a bicycle accident (and yes, for those who know me, I can’t swim, but I can pretty much ride a bike, thank you!). It was not that unusual as bike falls go: I was riding uphill perhaps faster than my mom allowed me at the time, and I bumped into a really, really, BIG rock. Someone nearby picked me up and, in great pain and crying very much, I said: “I want to go home, give me my tablet.” A very Gen-Z answer from me, and I don’t recommend that readers have such an attachment to their devices. But let’s be honest—would I have been in such a situation at the time if I had been peacefully playing the Sims instead of performing dangerous activities (such as bike riding) in real life? Is there a fine line between real and virtual? Can I immerse myself in a virtual environment where I *feel* like I drive without actually driving *insert cool vehicle*?

Fig. 2: I created this sketch of “maker space” in Procreate on my tablet.

Augmented Reality (AR) is something I have been interested in learning more about as an internet geek. Although I count stars for a living now (I am an astrophysics major), I am still very much intrigued by the world of AR. Whenever there is a cool apparatus in front of me, I take full advantage of it and try to learn as much as I can about it. That’s why one of my favorite on-campus jobs is at the Williams College Makerspace! It is the place where I get to be a part of a plethora of cool projects, teach myself some stuff, and go and share it with the world (i.e., as of now, the College campus and grand Williamstown community!). Fast forward to my sophomore year of college, Professor Giuseppina Forte, Assistant Professor of Architecture and Environmental Studies, reached out to the Makerspace to create a virtual world using students’ creativity in her class “ENVI 316: Governing Cities by Design: the Built Environment as a Technology of Space”. The course uses multimedia place-based projects to explore and construct equitable built environments. Therefore, tools like Augmented Reality can enhance the students’ perspectives on the spaces they imagine by making them a reality.

This project would not have been possible without the help of the Makerspace Program Manager, David Keiser-Clark. He made sure that there was enough communication between me and Professor Forte so that deadlines were met for both the in-class project and the Williams College “Big Art Show”. In short, my role was to help students enhance their architectural designs with augmented reality simulations. This process involved quite a few technical and creative challenges, leading to a lot of growth as a Makerspacian, especially since I had no background in AR before taking part in this project!

Choosing Tools and Techniques

My role in this project was to research current augmented reality software, select one, and then teach students in the course how to use it. In consultation with Giuseppina and David, we chose Adobe Aero because it’s free, easy to use, and has lots of cool features for augmented reality. Adobe Aero helps us put digital stuff into the real world, which is perfect for the architectural designs in the “ENVI 316: Governing Cities by Design” course. I then set up a project file repository and inserted guides that I created, such as “Interactive Objects and Triggers in Adobe Aero” and “How to Use Adobe Aero”. This documentation is intended to help students and teaching assistants make their own AR simulations during this — and future — semesters. This way, everyone can try out AR tools and learn how to apply them in their projects, making learning both fun and interactive.

AR Simulations: My process

Fig. 3: I have successfully augmented reality so that, viewed through a tablet, my “maker space” 3D model now appears to be positioned in front of Chapin Hall at Williams College.

Once we had all the tools set up with Adobe Aero, it was time to actually start creating the AR simulations. I learned a lot by watching YouTube tutorials and reading online blogs. These resources showed me how to add different elements to our projects, like trees in front of buildings or people walking down the street.

Here’s a breakdown of how the process looked for me:

  1. Starting the Project: I would open Adobe Aero and begin a new project by selecting the environment where the AR will be deployed. This could be an image of a street or a model of a building façade.
  2. Adding 3D Elements: Using the tools within Aero, I dragged and dropped 3D models that I previously created in Procreate into the scene. I adjusted their positions to fit naturally in front of the buildings.
  3. Animating the Scene: To bring the scene to life, I added simple animations, like people walking or leaves rustling in the wind—there was also the option to add animals like birds or cats which was lovely. Aero’s user-friendly interface made these tasks intuitive, and videos online like this one were extremely helpful along the way!
  4. Viewing in Real-Time: One of the coolest parts was viewing the augmented reality live through my tablet. I could walk around and see how the digital additions interacted with the physical world in real-time.
  5. Refining the Details: Often, I’d notice things that needed adjustment—maybe a tree was too large, or the animations were not smooth. Going back and tweaking these details was crucial to ensure everything looked just right. Fig. 1, 2 & 3 show an example of a small project I did when I just started.

Final Presentation: The Big Art Show

Figures 4 and 5 show side-by-side comparisons of real-life vs AR spaces as presented at the Williams College “Big Art Show” in the fall 2024 semester. The student who used the AR techniques decided to place plants, trees, people, and animals around the main road to make the scene look more lively and realistic.

Fig. 4: Exhibition at the “Williams College Big Art Show” featuring 3D printed houses and buildings alongside a main road.

Fig. 5: Live recording of an AR space in Adobe Aero, enhanced with added people, trees, and birds to create a more memorable scene.

Lessons Learned

Reflecting on this project, I’ve picked up a few key lessons. First, jumping into something new like augmented reality showed me that with a bit of curiosity, even concepts that seem hard at first become fun. It also taught me the importance of just trying things out and learning as I go. This project really opened my eyes to how technology can bring classroom concepts to life—in this case, the makerspace!—making learning more engaging. Going forward, I’m taking these lessons with me.

TIDE Grant: Sustainable and Reusable STEM Learning Kits for Students in Under-Resourced 5th and 6th Grade Classrooms

Written by Divine Uwimana ’27 and Elena Sore ’27

Introduction

Our latest prototype of the car includes a winding mechanism, which will act as an additional modification of the base kit.

In an ideal world, students would have equal access to education, but that isn’t the case. While some schools have the latest learning technologies, hands-on opportunities, and all the funding they need, others are trying to give students the highest quality education they can without access to these resources. Worst of all, the schools negatively impacted are often in historically underrepresented communities with large populations of people of color, perpetuating a cycle of poverty. While brainstorming ways of helping our local communities as part of the TIDE Grant (Towards Inclusion, Diversity, and Equity Grant) proposal, providing more equitable access to STEM education stood out as a clear way we could make an impact. Building these STEM kits is a way we Williams students can use our education and access to help build up the community around us.

What is Hands-On STEM Education?

Hands-on STEM education uses physical interaction to provide real-world experiences that help reinforce the concepts being taught. While these experiences can be helpful to the learning process, they are often expensive. Whether it’s premade kits that can cost upwards of 40 dollars per student or costly field trips, these experiences often don’t fit within school budgets. This disparity is critical to solve because studies have shown that hands-on learning opportunities help students retain what they learn better than standard learning methods such as lecturing. The problem is exacerbated in the education of younger students (the K-6 range) because younger children’s shorter attention spans can cause them to lose focus more quickly in the absence of active and experiential pedagogy.

This problem doesn’t only exist in a classroom setting. Many attempts have been made to bring hands-on learning to the home as supplemental education and homeschooling tools; however, cost is even more of a problem here. One of the largest companies currently producing these kits for home use is Crunch Labs. While they are similarly priced, averaging around $30 a kit, the requirement to purchase a monthly subscription typically results in costs of $300 or more per child. Also, Crunch Labs and other kits built for a home environment are often not reusable.

Access to hands-on STEM education is so important because high-quality STEM education improves students’ creativity and problem-solving skills. Research has shown that exposing kids to STEM in elementary school – especially between the first and third grades – provides students with the foundation they need to succeed in STEM-field careers. According to the research, U.S. adults with 1-2 years of experience in the workforce have reported the highest exposure to STEM concepts in elementary school. Between the ages of 5 and 8, 46% of this population experienced a STEM-related track in school, and 53% of this population currently works in a job that entirely or heavily involves STEM – by far the largest percentage of any sector of jobs in the workforce. This suggests that exposing students to STEM at a young age captures their imagination and keeps them interested in science, technology, engineering, and math jobs early in their careers.

As student workers in the Makerspace, Divine Uwimana ‘27 and I, Elena Sore ‘27, met and collaborated with Paula Consolini, Adam Falk Director of the Center for Learning in Action, Tanja Srebotnjak (Director of the Zilkha Center for Environmental Initiatives), and David Keiser-Clark (Makerspace Program Manager). We identified a few critical criteria for STEM kits:

  1. Our STEM Kits need to be as low-cost as possible to produce. To ensure this, we must find creative ways to reduce material usage and implement supplies students may already have in their classrooms into the kits.
  2. We must design STEM kits to leverage existing lesson plans and learning requirements to ensure that the STEM kits fulfill the educational needs and standards set out by organizations like the Department of Education. 
  3. The STEM Kits must be designed to be reusable, durable, and sustainable, using sustainably sourced and produced materials wherever possible.

Brainstorming

Divine and I began the brainstorming process by researching existing STEM kits currently available on the market and how we might further improve them for our demographic group with respect to the aforementioned criteria. Since we both had little experience in the field beforehand, we wanted to understand better the design features other organizations used to create highly engaging STEM kits. Some of the qualities we observed that we believe we should replicate are listed below:

  • A good STEM kit is highly interactive. Parts of the kit, especially mechanical parts, should be designed so that students can visually see what is happening and how the action they are putting in is causing the final result.
  • A good STEM kit should not be a “one and done.” Ideally, a STEM kit will have multiple stages that allow students to build upon a product in stages, introducing new concepts or building on previous concepts.
  • A good STEM kit should be a manageable length. Even if students are having fun, dragging it out too long risks boring the students and causing the learning aspect to be ineffective.
  • A good STEM kit should be fun yet educational. This means balancing the kit to be both rich in academic concepts and interesting enough to keep students engaged.
  • A good STEM kit should encourage teamwork and cooperation. It should allow kids to work together to build their social skills while learning.
  • A good STEM kit should allow “trial and error.” It should enable the kids to learn from mistakes and thus build their problem-solving skills.
  • A good STEM kit should be simple yet visually complex. Just because the final mechanism is a complex contraption doesn’t mean the process of assembling it can’t be simplified and streamlined.
Front and back views of the mechanical scotty dog kit from Carnegie Mellon University.

During our design process, we also got to experience assembling a STEM kit first-hand: the mechanical Scotty dog kit we received from Carnegie Mellon University, courtesy of Professor Bill Nace and Professor Robert Zacharias. The materials used to assemble it are easy to manufacture, primarily thin sheets of wood and acrylic with 3D-printed plastic parts. The design is simple but very interesting; a single motor in the middle drives both the tail wagging on the back and the head bobbing on the front through a system of gears on the back. The head bobs up and down in a specific pattern because the radius of the spinning piece increases and decreases as it turns, creating head movements that feel random. The tail spins on an arm and is locked upright by a bracket, so a simple spinning motion makes the tail wag back and forth. Finally, all of this is controlled with a light sensor, allowing the user to control the speed of the motion by raising or lowering a hand above it. All these mechanisms combined to create a fascinating kit from a design standpoint, with a lot of interactivity and interesting mechanisms on display, while being very quick for us to reassemble, even without instructions.

From this experience, we better understood how to design an effective STEM kit. Then, we started brainstorming ideas for STEM kits that we could create, and we ended up with three designs we wanted to develop further. The first is a model car, which would use a wind-up mechanism built by students to showcase the properties of potential and kinetic energy. The second idea is an energy kit expansion for the car, allowing students to electrify it while teaching them the basics of electricity and explaining renewable solar energy concepts. Finally, the third idea is a solar system kit, which would focus on having students assemble a solar system model to teach about the planets in our solar system and our place in the universe. With these initial ideas, we started prototyping the model car kit.

Prototyping the Model Car Kit

An initial prototype for the base car kit, giving us an idea of what the final product may look like.

The main idea of our wind-up car kit was simple. But, as with many projects, it quickly evolved into a complex design with many digital iterations and three 3D printed prototypes. For this first design, a 3D printed base would connect the two cardboard sides and help support the back axle, which would wind up using a rubber band attached to it and the frame. Wooden dowels would act as axles and bottle caps as wheels, so when you pulled it back, the car would launch forward using energy stored in the rubber band. 

While this was a great initial idea, we encountered some problems. First, cutting out the cardboard sides proved difficult because two holes needed to be cut in the middle for the axles. Ultimately, we decided that the side pieces should be replaced with laser-cut wood in the final design, which would be reusable and easier for kids to work with while providing more structural rigidity. Another issue we discovered was that the rubber band would stay on the axle instead of coming unhooked at the end, catching and abruptly stopping the car. Our solution was to move the hook point for the rubber band forward so the band had enough energy to detach itself from the axle at the end. We also had to ensure the design didn’t use too much plastic, as we hope to create all the filament ourselves using recycled PET from locally gathered plastic bottles. We ended up using a honeycomb pattern, often seen in structures that use empty space to save material while retaining structural integrity; by implementing this, we saved enough plastic that the larger prototypes consumed less than our smaller initial prototype.

Our first three prototypes for the 3D printed base, showing how it evolved to meet the project’s needs while remaining efficient in plastic usage.

For our third prototype, we rounded and smoothed as many parts as possible to prevent sharp points or edges that can occur in 3D printing. We also did this to prevent sharp points from catching or breaking the rubber band. Finally, we modified the slot at the front for the rubber band to help the car retain it, even after it detaches from the axle.

The biggest problem we ran into was not with the design of the base but with the kit itself. Our initial idea was interesting but violated one of our own design rules: the kit was just one thing, assembling the car with the rubber band. If we wanted to make an exciting kit, we had to add at least one additional stage involving more engineering and demonstrating the concepts of potential and kinetic energy in a different way.

While looking for inspiration, we stumbled upon a design by a maker named Greg Zumwalt for a 3D Printable Wind-Up Car that used a simple mechanism to limit the speed, allowing it to move farther and longer after windup as opposed to a design like ours, which simply went at top speed after release. Looking into this project’s mechanics, we realized that a similar design could be perfect to demonstrate the ways energy can be modified in the process of converting from potential to kinetic energy. So, to better understand how the mechanics worked, we downloaded the files and began printing them out to design a similar mechanism within the constraints of our model kit.

It was at this moment that the Office of Institutional Diversity, Equity, and Inclusion announced that our application for a TIDE grant was accepted and that our STEM kit project would be funded. 

Next Steps

Our next steps are to complete the second expanded energy source for our car prototype, align that with curricular concepts, and then meet later this month with an elementary school teacher to share our project and hear initial feedback. We plan to incorporate that feedback into the car prototype and then next meet with that teacher’s class and observe student reactions to utilizing it. As we continue to build several STEM kits, our theme will be to test, demonstrate, observe, seek feedback, iterate, and repeat. We hope these kits might have a significant impact on elementary students’ education in the Berkshires.

Beyond Board Games: Exploring a 3D Printed Catan Board’s Role in Creativity, Connection, and Vulnerability

Introduction

Top view of hexes (and an easter egg in the sheep tiles!).

Few technologies capture the imagination like 3D printing. The ability to bring digital designs to life and hold them in our hands ignites a creative spark within us–or maybe just me. One of my first encounters with detailed 3D printed objects was at the Berkshire Innovation Center (BIC), an organization in Pittsfield, MA dedicated to investing in the local community. BIC’s passion and ability to embody childlike wonder left a lasting impression, particularly in the form of a blue, square-shaped chainmail pattern. Defying its angular components’ design constraints, the chainmail moved with remarkable fluidity, which was fascinating to a person like me with a strong spatial and tactile memory. It was incredible to see where negative space was needed for movement and the precision with which the chainmail was printed. This is where the allure of 3D printing lies for most people–the ability to transform concepts into tangible objects. Even seeing others’ projects can have a profound impact on creativity.

This all brought me to my dear friend Mo (Mohammad Faizaan ‘23). As I sat in Lee’s booth waxing starry-eyed over a 3D printed Catan board I saw online, he mentioned that he had experience with Williams’ Makerspace and could help make this dream a reality. (Thank you, Mo!)

Interpersonal Connectivity of Catan

3D printed Catan hexes, complete with my favorite detail–red silos for wheat storage.

Beyond its status as a game, Catan offers valuable lessons applicable to real life. While the basics of resource management and investment strategies are readily apparent, the game’s social dynamics are equally intriguing. Depending on the group of players, the game can take on vastly different tones. On one hand, I have a group that is very much into competitive play (you know who you are 😉) and is driven by the idea of winning at whatever cost, which features more individualistic motives and trading futures (because…you know…Williams). On the other hand, my preferred collaborative-based play has been lovingly dubbed “socialist Catan”–prioritizing mutual trades, collective advancement, and the fun of the game. But regardless of which group I play with, it’s always part of the fun for me to observe how different players navigate these dynamics and how they adapt to each situation—when to use the stick and when to use the carrot—which provides insights into an array of problem-solving approaches and interpersonal dynamics (and yes, I’m a psychology major).

The Joy of Sharing Worldbuilding

My daughter and I would paint on the floor and take pictures to remember the colors we used.

What started as a pursuit of visual appeal and a quirky gameplay experience unfolded into a heartwarming journey of discovery with my three-year-old. Stepping back from strategy, painting the stark white 3D printed pieces became an exploration of the ‘big picture.’ Discussing color, the significance of a base layer for depth, and her inquiry about why I painted the “pointy trees” one color and the “round trees” a different color led to conversations about the different types of trees and their similarities and differences. This colorful journey became a means for her to develop a general understanding of Catan’s terrains, insights into each terrain’s unique elements, and why they were crucial for settlement–to the point where she ensures each sheep hex touches a wheat hex “so they can eat!” I even snuck a little geometry in there, and, to this day, she proudly proclaims that “hexagons are the bestagons” (fun link if you’re interested!). Beyond strategy, economics, art, geography, and math, the process was a rich opportunity for sharing experiences, bonding, and transmitting knowledge to the next generation.

Struggles with painting and being vulnerable (but mostly the vulnerability part)

The detached tree hex still counts as lumber, so at least we’re not ‘missing the forest for the tree.’

I am no artist. This admission is not fueled by self-deprecation but rather an acknowledgment of my pursuit to overcome a slight strain of perfectionism. This project has been fun…and stressful. Even when David saw the finished product and expressed his admiration, encouraging me to write this blog post and share my experience with all of you, my initial response was tinged with embarrassment. The echoing thought in my mind: “It’s not good enough.” Those pesky white spots that were surprisingly difficult to get paint into, the accidental detachment of a tree (oops), the crooked lines, and the colors that didn’t quite achieve a perfect harmony. It all seemed like a lot.

I am also no blogger! Posting this article is even more terrifying! Sharing imperfect paintings is one thing, but sharing imperfect words?! Terror! Sharing this with you all is challenging for me. It shines a spotlight on my areas of vulnerability, whether it’s the brushstrokes that miss their mark, the sentences that might not be as polished as I’d like, or even my experiences as a parent and student. But if I tell my daughter, “You can do hard things,” then I can too. So I hope this post can shine a light on the amazing capabilities of the Makerspace and encourage a few of you to see what it has to offer. The Makerspace folks are all wonderful people who are excited to help you discover a few new facets of yourself!

Thanks for reading.

(3D printing files can be found on Thingiverse by creator JAWong.)

Thanks to David Keiser-Clark, Makerspace Program Manager, for providing me an opportunity out of my comfort zone, the patience to wait until I felt ready to post this, and allowing me to share my wacky love of 3D printing, games, and my life side-quest of normalizing vulnerability.

Before printing the 3D borders, but we were eager to play!

Architecture in Slices: 3D Printing for the Big Art Show

The Arts 314 exhibit in the Big Art Show

In my first Makerspace academic project, I jumped into the deep end. My role was to support Giuseppina Forte, Assistant Professor of Architecture and Environmental Studies, and her students by preparing exhibition materials for the end-of-semester campus Big Art Show. I supported her two studio arts classes, “Design for the Pluriverse: Architecture, Urban Design, and Difference” (ARTS 314/ENVI 310) and “Governing Cities by Design: the Built Environment as a Technology of Space” (ARTS 316/ENVI 316). For ARTS 314, her students designed an architectural model of an outdoor community building, and for ARTS 316, they re-envisioned the Cole Avenue Rail Yard area of Williamstown into a river-side park. In practice, that meant converting the students’ digital architectural designs into 3D-printed objects. What seemed straightforward quickly became an amazing learning experience filled with challenges and growth that I want to share.

Prototyping

The first of many difficulties arose when I sliced, or readied, the models for the 3D printers. First, some files had problems too deep for the repair algorithms in the FlashForge and Prusa slicer software to fix. So, I spent some time learning MeshMixer and how to identify the Achilles’ heels of the models. In most cases, manually widening thin connections was sufficient. Second, some prints seemed impractical, if not entirely impossible. In some cases, these impractical features were easily removable without compromising the final product, like thin columns on B3. In others, the features were inherent to the design, as with A1, whose elevated, thin, and intricate spiral posed a challenge for 3D printing. Finally, some prints, like B2, would simply take an incredibly long time to print – up to 60 hours.

The models I would print. From top left to bottom right: A1, A2, A3, A4, A5, S1, S2, B1.

A prototype of A2

In a typical project, I would prototype each print and present the prototypes before starting any final prints. This helps set expectations for what a 3D print looks like and how the pieces go together, and it allows me to get feedback on the prints. However, these prints proved particularly challenging to prototype for the above reasons. While I could get a couple of iterations of the simpler prints, many prints proved difficult to scale down due to their small and intricate details, and, in my mind, no prototype is worth 40 hours or 100 meters of filament given the likelihood of repeated failed prints.

Crunch Time

For prints with exposed, flat surfaces like A4, printing upside down provided a smoother finish and allowed the prints to peel off of the plate more consistently

However, dilly-dallying in this “half-prototyping” stage created a problem. Since I was hesitant to review an incomplete set with everyone, I mentally stayed in the prototyping phase, not starting any of the final prints. Instead, I spent this time optimizing the prints that I hadn’t been able to prototype. I ran tests to maximize the quality of the print while minimizing the filament used. While I can’t say that this time was wasted, since many of the optimizations helped me later, in hindsight, I wish that I had paid more attention to the time and started my final prints sooner, as I could have prevented much of the stress in the final time crunch.

I found that I was able to go as low as 8% infill on solid prints before jeopardizing structural integrity.

The final week and a half of the project was a combination of epic stress and stellar production. It started with David Keiser-Clark, the Makerspace Program Manager, asking me if I thought it would be possible to finish and deliver the prints before the start of the show in nine days. I panicked. I had become so immersed in solving the technical issues that I had lost track of the delivery date. I sat down and figured out that the total printing time for this project would be ~240 hours. Had I immediately started two prints on the two working printers and run them 24/7, the prints would have finished only three or four days before the deadline. I immediately put two prints on the printers and responded to David, cautiously telling him I thought I could finish them in time.
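
The quick feasibility math above can be sketched in a few lines. The numbers (~240 hours, two printers, nine days) come from the post; the assumption that jobs divide evenly across printers and run around the clock is an idealization.

```python
# Back-of-the-envelope print scheduling, assuming jobs can be split
# evenly across printers and run 24/7 (both generous idealizations).
TOTAL_PRINT_HOURS = 240   # estimated total printing time
PRINTERS = 2              # working printers at the time
DEADLINE_DAYS = 9         # days until the Big Art Show

hours_per_printer = TOTAL_PRINT_HOURS / PRINTERS
days_needed = hours_per_printer / 24
slack_days = DEADLINE_DAYS - days_needed

print(f"{days_needed:.0f} days of continuous printing, "
      f"{slack_days:.0f} days of slack before the show")
```

Even this best case left only a few days of cushion, and that cushion assumed zero failed prints.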

A spectacular failure of one of the prints

My estimates couldn’t have been more wrong. My first two prints should have been relatively quick and easy, but when I returned to collect them I was greeted by two spaghettified clumps of white PLA. I reran both prints, praying that they were flukes, but of course, they weren’t. Within 5 minutes, both prints had failed again. I did a 20-minute calibration of both printers and reran the prints: the print in the FlashForge was successful, but the Prusa failed again. Time was slipping away, and only one printer was operating reliably. 

Removing Roadblocks

All four printers running smoothly!

I reached out to David and explained the issue. He helped me configure the two out-of-commission Dremel printers, which seemed to be my saving grace. But when I transferred my slices to the Dremels, I found that many of the round prints were larger than the Dremel’s base plate. This, combined with the fact that the Dremels struggled with finer detail in test prints, added to my stress. However, after examining the models, I found that I could cut the larger files into smaller pieces, print them, and then later assemble and permanently glue them together.

The final print of A2 and the tops of S1 and S2, unfortunately printed in different sizes.

Six days before the show, we had four working printers. The Prusa had been fixed (twice) and was churning out the finer-detailed prints. The FlashForge was working on a piece of the largest print, which I had cut down to 30 hours (from 48) by increasing the layer height to the maximum of 0.3mm (75% of the nozzle diameter). Both Dremels were printing the remaining pieces of the largest print, and we had received permission to use the Science Shop’s Ultimaker for A1, which was the most challenging, longest-running, and most likely-to-fail print in the entire project. For a moment, it looked as if the project would be done comfortably in time, with several days of cushion to spare.
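
As a rough sketch of why raising the layer height saved so much time: print duration scales roughly inversely with layer height, since each layer is one pass of the nozzle. The 0.2 mm starting height below is my assumption; the post only gives the 0.3 mm maximum and the 48-to-30-hour improvement.

```python
# First-order model: fewer layers means proportionally less print time.
# Real slicer estimates deviate because perimeters, travel moves, and
# speed limits don't all scale with layer height.
def estimated_hours(base_hours, base_layer_mm, new_layer_mm):
    return base_hours * (base_layer_mm / new_layer_mm)

# Hypothetical 48-hour print sliced at 0.2 mm layers, re-sliced at the
# 0.3 mm maximum (75% of a 0.4 mm nozzle diameter).
print(round(estimated_hours(48, 0.2, 0.3), 1))
```

The estimate lands near the ~30 hours the re-slice actually achieved; the gap is the overhead that does not scale with layer count.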

Natural supports used less filament, took less time, and failed less often than vertical supports

One day later the situation flipped on its head. The filament for the Ultimaker, ordered in advance, failed to arrive. Three prints in the Makerspace failed. The filament roll on the FlashForge got tangled and caused a jam, the Prusa had spaghettified, and one Dremel printed the house sans the roof. I was able to find and solve a problem within the Dremel slicer software and recalibrate the Prusa, but for now, the FlashForge was out of commission. 

In hindsight, I had not anticipated the variance in scaling among different slicing programs. The Dremel software defines its x-axis differently than the FlashForge software does, which resulted in pieces that scaled poorly relative to the rest of the model.

A copy of A1 printing on the FlashForge 1 day before delivery.

Three days before the show, I had somehow managed to print A2, A3, A4, A5, B1, and B3. We fixed the Dremel and set the most structurally fragile and complicated print (A1) to run overnight on all four printers. This would be our last chance. 

One day before the show, our final prints were completed: the Prusa and FlashForge succeeded, while both Dremels failed. Of the two successful prints, the Prusa created a beautiful, highly detailed print. Unfortunately, I woke up with the flu and didn’t get to say goodbye to the prints, nor could I go to the Big Art Show. However, I got to see pictures, and I was proud to support the students’ architectural work. To me, though, the greatest value of this project was not in the prints themselves but in the lessons I learned and will take with me into my future work, both in and out of the classroom. Specifically, I developed confidence in my ability to solve technical problems in a new medium while working under pressure, and I improved my capacities in project management.

The final collection of pieces

Murphy’s Law

The Arts 316 exhibit in the Big Art Show

Murphy’s Law states that if something can go wrong, it will. Doubly so when you are under a time crunch. In hindsight, most of this pressure could have been avoided had I made an effort to build a timeline for the project before the due date was imminent. When printing, you have to strike a balance between quality, material used, and time. Before the time crunch, I was trying to maximize quality and minimize the material used. However, the instant time became the driving factor, I swapped those priorities. All in all, it worked out, but if I had managed my time better, I likely could have delivered just as good a final product with less stress.

Post Mortem

During this project, I discovered how fragile 3D printers are. We had four printers in the Makerspace, and I had to do a total of eight mechanical fixes. At some points, I felt completely defeated. It seemed like every successful print was counterbalanced by an awful grinding sound or a jammed PLA feed. This was not the first time I had ever 3D printed, but it was my first time tinkering with 3D printers. Admittedly, at the start of the project, I was so scared of breaking something that I barely opened the side panel before asking for help. The silver lining of the printers breaking so often was that I had the opportunity to learn how to fix them. During the project, David took a few hours to show me around each printer, explaining how they work and where they usually fail. This paid dividends. By the end of the project, I was comfortable repairing every printer we had, and I reached a point where I didn’t even have to tell David when they were broken, likely saving him more time than it took him to show me how they all work. I’m excited to take this experience and apply it to my next faculty project in the Makerspace.

Pixels or Petals? Comparing Physical vs. Digital Learning Experiences

Fig. 1: Isabelle Jiménez and Harper Treschuk outside the Williams College Makerspace located in Sawyer 248

Learning has not been the same since COVID. Like the vast majority of students around the world, my classes were interrupted by the COVID pandemic back in 2020. After having classes canceled for two weeks, and in an effort to get back on track, my high school decided to go remote and use Google Meet as an alternative to in-person learning. Remote learning did not feel the same — using PDF files instead of books, meeting with peers over video conferencing for group projects, and taking notes on my computer and studying only digital material for exams. I cannot say that I was not learning, but something rewired my brain and I have not been able to go back. Due to COVID and other factors, the use of simulations in schools may increasingly supplant hands-on learning, and more research needs to be done not only on the implications for content knowledge but also on students’ development of observational skills.

Fig. 2: Sketchfab provides a digital view of the 3D model of a lily, accessible via an iPad interface. This interface allows the children at Pine Cobble School to engage with and explore the object in a virtual environment.

Last week, Williams College students Isabelle Jiménez ‘26 and Harper Treschuk ‘26 visited the Makerspace to start a project for their Psychology class, “PSYC 338: Inquiry, Inventions, and Ideas,” taught by Professor Susan L. Engel, Senior Lecturer in Psychology & Senior Faculty Fellow at the Rice Center for Teaching. This class includes an empirical project that challenges students to apply concepts on children’s curiosity and ideas to a developmental psychology study. Isabelle and Harper decided to analyze the ideas of young children following observations of plants, specifically flower species. The students plan to compare how two groups of similarly aged children interact with flowers. The first group will interact with real flowers and will be able to touch and play with the plants (Fig. 1), and the second group will interact with 3D models of the plants using electronic devices (iPads) that enable them to rotate and zoom in on the flowers (Fig. 2). By analyzing the interactions of children with real and simulated flowers, they hope to extend existing research on hands-on and virtual learning to a younger age range. Valeria Lopez ‘26 was the lead Makerspace student worker who assisted them in creating the necessary models, which will be covered in this blog post.

I was excited to learn about Isabelle and Harper’s project and quickly became involved by assisting them in using Polycam 3D, a mobile photogrammetry app. This app enabled us to quickly create three-dimensional digital models of physical flowers. We opted for photogrammetry as our method of choice due to its versatility—it can model almost anything given enough patience and processing power. Photogrammetry involves capturing a series of photos of an object from various angles, which are then processed by software to create a coherent three-dimensional digital model. To meet our project’s tight deadline, we decided to experiment with smartphone apps like RealityScan and Polycam, which offer a user-friendly approach to 3D object creation. While our standard photogrammetry workflow in the Makerspace provides greater precision, it requires more time and training because it uses equipment such as a DSLR camera, an automated infrared turntable, a lightbox, and Metashape software for post-processing. Despite initial setbacks with RealityScan, we successfully transitioned to Polycam and efficiently generated 3D models. These models serve as educational resources for children, and since precise accuracy wasn’t necessary for this project, using a mobile app proved sufficient. This rapid approach ensures that the 3D models will be ready in time for the educational teach-in Isabelle and Harper are organizing at Pine Cobble School.

Process

Fig. 3: This scene features a daffodil placed atop a turntable, all enclosed within a well-lit box to enhance visibility and detail.

We began our project by utilizing the photography equipment at the Makerspace in Sawyer Library to capture images of flowers in vases. Initially, we were careful to avoid using the provided clear glass vases because translucent and shiny objects are more difficult for the software to render correctly into accurate models. With the guidance of David Keiser-Clark, our Makerspace Program Manager, we selected a vase that provided a stark contrast to both the background and the flowers, ensuring the software could differentiate between them (Fig. 3 & 4).

Fig. 4: In the foreground, a phone is mounted on a tripod, positioned to capture the flower’s movement.

Setup

Our setup involved placing the flowers on a turntable inside a lightbox and securing the smartphone, which we used for photography, on a tripod. 

Troubleshooting

Fig. 5: Isabelle and Valeria (Makerspace student worker who participated in this project) analyze the 3D models in Polycam.

Our initial approach involved seeking out a well-lit area with natural lighting and placing the plant on a table with a contrasting color. However, we soon realized that the traditional method of keeping the phone stationary while rotating the subject wasn’t optimal for software designed for smartphones. While that approach is common in traditional photogrammetry, our mobile app performed better with movement. Recognizing this, we adjusted our strategy to circle the subject in a 360-degree motion, capturing extensive coverage. This resulted in 150 pictures taken for each flower, totaling 450 pictures. Despite initial setbacks with two different photogrammetry apps, our second attempt with Polycam proved successful, allowing for more efficient and accurate processing of the models (see Fig. 5).
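
The capture plan reduces to simple arithmetic. The totals (150 photos per flower, 450 overall) are from our session; the split into three camera heights with a shot every few degrees is an illustrative assumption, not a record of exactly how we moved.

```python
# Handheld photogrammetry capture arithmetic. The orbit-height and
# angular-spacing values are illustrative assumptions; only the totals
# match what was actually shot.
def photos_per_subject(orbit_heights, degrees_between_shots):
    shots_per_orbit = round(360 / degrees_between_shots)
    return orbit_heights * shots_per_orbit

per_flower = photos_per_subject(orbit_heights=3, degrees_between_shots=7.2)
total = per_flower * 3  # three flowers scanned
print(per_flower, total)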

Results

Fig. 6: An alstroemeria flower model, which is one of the final models uploaded to SketchFab. The users will be able to interact with the object by rotating it in a 360 degree manner.

Fig. 6: An alstroemeria flower model, which is one of the final models uploaded to SketchFab. The users will be able to interact with the object by rotating it in a 360 degree manner.

We did not expect to need to do so much troubleshooting! In all we spent 45 minutes loading and testing three different apps, before settling on one that worked successfully. We are extremely happy with the end results. As a final step, I uploaded our three models to SketchFab to ensure that the children could easily access them across different devices (Fig. 6).

Next Steps

  1. Engage with Isabelle and Harper to gather their general impressions on the kindergarteners and first graders’ interactions with the real and digital 3D models while still maintaining complete confidentiality of the results.
  2. Take the opportunity to delve deeper into mobile photogrammetry tools and document the process thoroughly. Share this documentation with other makerspace student workers and the wider community to facilitate learning and exploration in this area. 
  3. Collaborate with other departments on similar projects that utilize 3D objects to enhance educational experiences, fostering interdisciplinary partnerships and knowledge exchange.

Postscript (May 10, 2024)

Isabelle and Harper report that their educational teach-in at Pine Cobble School using the 3D flowers was a success:

The students were all able to rotate them and zoom in and out. We noticed that as expected students in the virtual condition reported visual observations while students in the physical condition reported tactile observations as well (but no observations about smell) — interestingly, this didn’t affect the number of observations between the conditions. Students were engaged with the materials although for a couple students we wondered if they became enraptured with the iPad rather than the task of observation itself — they were zooming out so far in order to make a flower disappear. Thanks again for your collaboration and support on this class project. We are interested to hear if the Makerspace decides to partner with the folks at the Cal Poly Humboldt Library in the future.

Whittle by Whittle: Zilkha Center Garden Signs 

When I was a prospective student, I recall my host bringing me near the Class of 1966 Environmental Center (“Envi Center”) to meet some of their friends. While passing through, I noticed a group of students picking apples from a tree and pulling weeds in the garden beds. As I took an apple from their bin and had a bite, I was incredibly overjoyed to see a garden after just having started one at my high school. Now, as a student and summer intern, I had the opportunity to see the hard work that goes into the maintenance to make the gardens a community space for all. This is why, when Christine Seibert, the Sustainability Coordinator at the Zilkha Center, reached out to the Makerspace for a project to make signage for the Envi Center gardens, I jumped at the opportunity to support this project!

Garden Beds behind the 1966 Environmental Center

Pre-project photo of the Garden Beds (without signage) behind the 1966 Environmental Center

The garden beds are an integral part of the Envi Center. Under the Living Building Challenge certification, the building is required to operate as a net-zero energy and water space, with 35% of the surrounding land area in food production. The beds are supported by the Center for Environmental Studies (CES) and the Zilkha Center (ZC), and maintained by ZC interns and Williams Sustainable Growers (WSG). Additionally, Landscape Ecology Coordinator Felicity Purzycki advises overall orchard maintenance.

These gardens provide opportunities for community building, food production, and help teach students new skills. With these goals also come challenges. While talking to Christine about the signage project, she mentioned how garden interns already have a lot to do maintaining the gardens. This has made it difficult to find bandwidth to create signage about what is being grown and share meta information about the gardens. In addition, the current wood cookies used for signage are beginning to fade. For more than four years, the Zilkha Center has wanted a more permanent and prominent solution to identify and distinguish plants grown; this will also help ZC interns and other people to know what is ready—or not—to pick. The new signage will cover three areas: identifying the perennial and annual plants, teaching people how to use the gardens through the honorable harvest, and when certain items are ready to be picked. 

Yoheidy sits with her series of laser engraved wood slabs. She later added a laser engraved metal QR code label that directs users to the hosted video tour.

Yoheidy sits with her series of laser engraved wood slabs. She later added a laser engraved metal QR code label that directs users to the hosted video tour.

Inspiration was taken from a project recently completed by Yoheidy (Yoyo) Feliz ‘26, who engraved wood slabs to make signs for visitors going through the virtual exhibit tour at the Stockbridge-Munsee Tribe’s exhibit in Stockbridge. Those wood slabs were sourced from Hopkins Memorial Forest which is also where our project’s journey began!

The sugar maple that provided logs for the signage

The sugar maple that provided logs for the signage

We received Sugar Maple logs claimed from the old grove across from the sugar shack with the support of Josh Sandler, Interim Hopkins Forest Manager. This tree fell two years ago, and had not yet been repurposed; the tree was part of the maple sugar grove that has a long history of being used for maple sugaring in Hopkins Memorial Forest. The logs were harvested with the help of a chainsaw by caretaker Javi Jenkins-Soresnen ‘25 who has a lot of experience in forestry.   

Logs into Lumber 

Sam Samuel '26 creating a temporary sled guide to saw logs into planks with bandsaw

Sam Samuel ’26 creating a temporary sled guide to saw logs into planks with bandsaw

Once we received the logs, we had a series of sessions in the Williams Hopper Science Shop with Makerspace Program Manager David Keiser-Clark and Instrumentation Engineer Jason Mativi. Our goal was to mill the logs into 35 planks measuring 4″x20″ with approximately a 1″ thickness. We purchased cedar posts—that had formerly been telephone polls—locally from the Eagle Lumber sawmill in Stamford, VT. In the end, we were able to create exactly 37 planks, leaving us with precious little room for error.                

Given the unevenness of the natural logs received, the first step was to build a sled (a platform) that would stabilize each log as we sliced them into planks with the bandsaw. We affixed each log to the sled with a couple screws (carefully avoiding the path of the bandsaw blade), sliced to create a flat side, then rotated the log 90 degrees and sliced again. After making two contiguous flat sides, we were able to slice the log more conveniently by using the bandsaw fence and tabletop. 

Completed lumber that was then left to dry for a week.

Completed lumber that was then left to dry for a week.

After cutting each plank, we let them dry for a week; this allowed them to shrink and to cup or curl (warp) a week. Before drying, the maple measured between 8 to 20% moisture content. Typically when letting wood dry, you want to stack your lumber with spacers to allow air flow to all sides, and allow it to dry for six months or more. Because we were short on time, we used spacers and placed weights on top of the stacks, hoping to aid them in drying flat. After a week of drying, we were able to visually see shrinkage and some warping. 

We then used the wood jointer to create one flat edge; this process created a nearly perfectly flat and square edge that was perpendicular to the wider section of the board. We then placed that flat edge against the fence of the table saw to create a second clean edge parallel to the jointed edge. We used the jointer again to create a nearly perfectly flat surface on the wide side of the board. Next we used the thickness planer to flatten the top face of the plank and be parallel with the bottom face. This work resulted in creating beautiful rectangular sugar maple planks that were both parallel and square. We repeated this process for each board.

Engraving

After we had jointed, sliced, and planed the maple logs into boards, Mativi and David taught me how to use the Epilog Laser Helix engraver to make a Welcome sign, informational signs for the Rain Garden, Solar Meadow, and Picking Sign, and also 31 plant identification signs. It was my first time using a laser engraver and I had to be conscious about placement, size, as well as laser power and speed. Using CorelDraw (software), I centered each sign’s text to the middle of the engraver platform, which ended up being 12 inches on the x-axis and 9 inches on the y-axis. I worried endlessly about placement and sizing so I first experimented on matboard. Despite my experimentation, I still had some underlying issues given varying thickness and placements that are evident in my very first attempts at engraving. Each laser engraving requires 15 to 20 minutes, and I often repeated that process two or three times to burn a deeper image into the wood.

Plank inside of Epilog Laser Helix after one round of engraving

Plank inside of Epilog Laser Helix after one round of engraving

First batch of completed planks for plants

First batch of completed planks for plants

 

Next Steps

Sam Samuel '26 rounding corners with belt sander

Sam Samuel ’26 rounding corners with belt sander

I expect to complete laser engraving all of the signs within the next two weeks. The next step will be to affix the signs onto cedar posts; Jason Mativi has already cut those into 48” lengths including a spiked tip to make it easier to drive them into the ground. The final steps will include sanding the sharp corners and adding a natural Walrus tung oil preservative to better show the grain and improve longevity. It will be exciting to see the signs all over the Envi Center gardens! 



 

Postscript (May 2, 2024)

Sam Samuel '26 with 37 laser-engraved signs for the Envi Center gardens. This project was sourced from three 24" sugar maple logs from a fallen tree in Hopkins Memorial Forest,

Sam Samuel ’26 with 37 laser-engraved signs for the Envi Center gardens. This project was sourced from three 24″ sugar maple logs from a fallen tree in Hopkins Memorial Forest,

Postscript (June 21, 2024)

Laser-engraved signs installed in the Envi Center gardens

Laser-engraved signs installed in the Envi Center gardens

Laser-engraved signs installed in the Envi Center gardens

Laser-engraved signs installed in the Envi Center gardens

Makerspace Collaborating on Multiple Sustainability Projects

Last spring semester, the Makerspace @ Williams College pivoted to focus on academic projects that support teaching and learning goals; previously, this focus had been an aspirational goal. The Makerspace Program Manager, David Keiser-Clark, and his team of amazing student workers, now support a dozen interdisciplinary academic and campus projects at a time. A quarter of these projects support sustainability, or specifically the Zero Waste Action Plan, including: (1) a three-college collaboration to create an eco-friendly deterrent for Japanese Beetles in our community garden; (2) a prototype to upcycle plastic bottles into 3D printer filament; and (3) a set of laser engraved wood signs, sustainably harvested from Hopkins Forest, for a Stockbridge-Munsee led garden video and audio tour at the Mission House in Stockbridge, MA. Below, you’ll find a brief spotlight on each project, and possible ways we might build on these initial efforts.

E4 Bug Off Team Project : Mitigating Japanese Beetle Damage

E4 Bug Off Team Project, installed in the Williams College Community Garden : Mitigating Japanese Beetle Damage

E4 Bug Off Team Project, installed in the Williams College Community Garden

The E4 Bug Off Team is a collaborative environmental project between engineering students from Harvey Mudd and Pomona Colleges, and students working with the Williams College Makerspace and Zilkha Center. The engineering students researched and developed a prototype that would safely repel Japanese beetles to hopefully stop them from defoliating raspberry bushes in the Williams College Community Garden. The Makerspace used 3D printers to create the parts and subsequently assembled the model. Zilkha Center interns then deployed the model in the gardens. The device is designed to be low-maintenance and only needs the reservoir filled weekly with 100% peppermint essential oil. Japanese beetles, in addition to other bugs and mammals, dislike the smell of the mint family, and the concentrated peppermint essential oil diffuses into the air via permeable wicks that extend from the reservoir tank.

One of five engineering diagrams from the 30-page E4 Bug Off Team Project.

One of five engineering diagrams from the 30-page E4 Bug Off Team Project.

The initial model was installed in the garden in July 2022, at the tail end of the raspberry season, and immediately leaked. This spring (2023), the Makerspace re-printed the reservoir tank with a higher density (50% solid as compared to 15%), tested the model and, after 24 hours, found it to be 100% water-tight. This second model was introduced into the garden with mixed results: the functional model performs as intended, but the impact is difficult to measure without a control plot or method of measuring beetle activity this year. 

In addition to recording measurements of a control plot, additional steps to increase effectiveness could include fabricating additional models to better saturate the air within the berry patch or returning the project to the engineering team for design modifications. The final version would be printed with ASA filament, which is physically stronger and UV/moisture resistant, as compared to PLA or ABS filaments.

To learn more about this project, read this blog post by Makerspace student worker Leah Williams.

Contributors: Harvey Mudd College (Students: Javier Perez, Linna Cubbage, Eli Schwarz, Stephanie Huang; Professors Steven Santana and TJ Tsai), Pomona College (Student: Betsy Ding), Zilkha Center (Students: Martha Carlson, Evan Chester, Sabrina Antrosio; Staff: Tanja Srebotnjak, Mike Evans, Christine Seibert) and Makerspace (Student: Leah Williams; Staff: David Keiser-Clark)

Polyformer: Sustainable 3D Printing at Williams College

While completing a month-long Zero Waste Internship at the Zilkha Center (through the ’68 Career Center’s career exploration Winter Study course), Camily Hidalgo pitched building a machine to convert waste plastic into usable 3D printer filament. The project aligns with the Williams College Zero Waste Action Plan, which is based on the sustainability strategy in the Williams College Strategic Plan. She envisioned this as being a collaborative effort between the Williams College Zilkha Center and the Makerspace. 

After researching several options, she selected the Polyformer because it is an open-source (publicly accessible) project that seeks to create a DIY kit, composed of standard and commonly found parts, able to convert and upcycle plastic bottles (waste) into usable 3D printer filament. This project was launched in May 2022 and has quickly amassed more than 4,000 people who follow and/or contribute to the project (on Discord), while a core group of dedicated volunteers develop the project.

Many of the 78 printed parts that will be assembled into the Polyformer.

Many of the 78 printed parts that will be assembled into the Polyformer.

The intended outcome is to build a machine, based on standardized specifications, that effectively slices a water bottle into a half-inch wide ribbon, and then feeds that ribbon through a heated funnel, called a hot-end, to extrude it as 1.75mm PET filament. Camily seeks to create a working prototype to demonstrate our ability to disrupt our plastic waste stream and upcycle that into usable 3D printer filament. Approximately 40 bottles are required to create a standard 1 kg roll of filament, (enough to print 6 of the aforementioned beetle devices!). This project seeks to raise awareness that we can both reduce the quantity of waste that the college ships offsite while using that waste to create new filament and thereby purchase less of that virgin material from China. Upcycling waste can reduce the environmental impacts associated with the extraction of raw materials and product manufacturing as well as the significant carbon footprint associated with shipping those products to us from the other side of the globe.

Polyformer diagram for building the "Right Arm Drive Unit Subassembly."

Camily Hidalgo notes that this project is complicated because the design is constantly being improved. Additionally, it requires 3D printing 78 individual parts and then assembling those with a kit of sourced materials that includes a circuit board, LCD screen, a volcano heater block and 0.4 mm hot end, a stepper motor, stainless steel tubing, bearings, neodymium magnets, lots of wires, and lots of metal fasteners.

This project began last spring semester and, as of this summer, all 78 parts have been locally printed. Assembly has begun and will be completed during the fall semester, followed by testing under a science lab exhaust hood to safely capture antimony (a suspected carcinogen) and other volatile organic compounds (VOCs) released when PET reaches its melting point. 

To learn more about this project, read this blog post by Makerspace student worker Camily Hidalgo.

Contributors: Zilkha Center (Student: Camily Hidalgo; Staff: Tanja Srebotnjak, Mike Evans, Christine Seibert), Makerspace (Students: Camily Hidalgo, Milton Vento; Staff: David Keiser-Clark), Chemistry (Professors: Chris and Sarah Goh; Staff: Gisela Demant, Jay Racela)

Laser Engraving: Stockbridge-Munsee Garden Video and Audio Tour

Yoheidy Feliz connecting a red maple slab to a slanted locust base, with dowels and wood glue.

The Stockbridge-Munsee Community Historic Preservation Office summer intern, Yoheidy Feliz, reached out to the Zilkha Center for help with creating locally sourced wooden signs for a permanent video and audio tour at the Stockbridge-Munsee Garden in Stockbridge, MA. She received a dozen sugar maple and red maple discs, plus locust wedges, all sustainably harvested from already fallen trees in the Williams College Hopkins Forest. 

Yoheidy approached the Makerspace and, in collaboration with expertise and tools from the Science Shop, learned how to use an industrial laser engraving machine to etch a welcome sign with QR code, as well as multiple audio guide messages, onto sanded wooden discs. She attached these discs to sloped wooden bases (“wedges”) using woodworking dowel joinery, wood glue and a mallet, and then applied a natural, non-toxic preservative coating of Walrus-brand tung oil. 

Yoheidy sits with her series of laser engraved wood slabs. She later added a laser engraved metal QR code label that directs users to the hosted video tour.

The day after completing all of this work, she installed these at the Mission House garden, and then created these stunning video and audio tours to guide local and remote viewers through the gardens.  

To learn more about this project, please be on the lookout for an upcoming Makerspace guest blog post by Yoheidy Feliz.
Contributors: Stockbridge-Munsee Community Historic Preservation Office (Staff: Bonney Hartley, Historic Preservation Manager; Student: Yoheidy Feliz), Science Shop (Staff: Jason Mativi, Michael Taylor), CES & Zilkha Center (Staff: Drew Jones, Christine Seibert), Makerspace (Staff: David Keiser-Clark)

Cloning the Last of its Kind

Milton Vento ‘26 using photogrammetry to create a 3D object

Most recently, Associate Professor of German, Chris Koné, approached the Makerspace with a problem: all but one of the file hanging clips on his beloved office desk had broken. The result: piles of overflowing manila folders surrounding his desk, cramping his office and style. He searched eBay, Etsy, and Amazon, but was unable to find replacement parts. He even visited a store in NYC that specializes in providing office parts. Alas, the parts were obsolete. So he approached the Makerspace and asked if we might be able to replicate his last remaining viable part.

Milton Vento and Chris Koné hold the original and cloned objects.

Milton Vento, the Makerspace’s summer student worker, took on the task as his first project, using it as an opportunity to learn photogrammetry, an accessible and low-cost method of taking many photographs of an object from varying angles and then using software to stitch them together into a 3D digital object. He expanded the project by testing four different methods of creating 3D objects: standard manual DSLR photogrammetry with Metashape software; photogrammetry using a smart turntable that rotates and sends an infrared signal to the DSLR camera, causing it to release the shutter, advance the turntable several degrees, and repeat; an older DAVID5 object scanner; and the RealityScan app, which requires only a smartphone. This exploration resulted in two distinctly more efficient workflows that will become standard use this fall in the Makerspace. 

He also successfully re-created a 3D object of the final remaining desk part, and printed and delivered a half dozen of these parts to Chris. Should any of these ever break, the file can easily be retrieved and re-printed. 
Contributors: German Department (Professor: Chris Koné), Makerspace (Staff: David Keiser-Clark, Student: Milton Vento)

Future Project Ideas

One upcoming and likely collaboration between the Makerspace and the Zilkha Center would be to laser etch additional sustainably harvested Hopkins Forest wood slices to create signs for the Williams College Community Garden. Additionally, the Zilkha Center, Makerspace, and MCLA Physics and Environmental Center may brainstorm the possibility of creating a larger prototype for upcycling plastic into pellets. The pellets could then be used for injection molding, given to local artists for artwork, or sold regionally; this idea was sparked by Smith College’s collaboration with Precious Plastics.


You can find this blogpost and other sustainability projects at sustainability.williams.edu.

From Teeth to Time: Discovering Siwalik Hills’ Past Through Archaeology

How did we get here? Where do we come from? What does our future encompass? As an aspiring scientist, I have always been fascinated by these (and many more!) questions about the evolution of humanity and the cosmos. Specifically, the modern ways in which experts around the world are working towards finding a unifying, concrete answer about the theory of evolution and dispersal of early humans. To my pleasant surprise, scientists at Williams College are making wonderful discoveries and progress on this topic, and I was able to contribute — even just a tiny bit — to some of their work this semester!

Some Background

Anubhav Preet Kaur pictured working at the ESR Lab at Williams College

Scientists believe that early humans dispersed throughout the world because of changing global climates. The specific routes that these early humans took are still inconclusive. However, there are several hypotheses about the possible areas they inhabited, given early Pleistocene evidence of hominin occupation in those areas. The hypothesis I will explore in this blog post draws on evidence of hominin occupation from sites surrounding the Indian subcontinent: Dmanisi, Nihewan, and Ubeidiya, to name a few.

One of the supporters of this hypothesis is Anubhav Preet Kaur, an archeologist conducting a paleoanthropological research project that seeks to identify whether the Siwalik Hills in India were a likely dispersal path for early humans. As Anubhav states: “The fossils of Homo erectus, one of the first known early human species to disperse outside of Africa, have been discovered from Early Pleistocene deposits of East Europe, West Asia, and Southeast Asia, thereby placing Indian Subcontinent in general—and the Siwalik Hills, in particular—as an important dispersal route.” The problem is that no fossil hominin remains or evidence attributed to any early hominin occupation have ever been uncovered in that area. Thus, her project seeks to paint a clearer prehistorical picture of the region’s ecology by precisely dating faunal remains from her dig sites. She hopes to indicate whether the Siwalik Hills, already famous for yielding many paleontological and archeological finds over the past hundred-plus years, would have had fauna and ecological conditions during these migratory time periods that would have supported early humans. And precisely dating these faunal remains requires the skills of Dr. Anne Skinner, a renowned lecturer at Williams College. 

Anne is a distinguished Williams College emerita chemistry faculty member who is an expert in electron spin resonance (ESR) and specializes in applying ESR techniques to study geological and archaeological materials. Anubhav is a Smithsonian Institute Predoctoral Fellow and presently a doctoral student at the Indian Institute of Science Education and Research in Mohali, India. Anubhav spent three seasons, between 2020-2022, doing paleontological field surveys and geological excavations at the Siwalik Hills region in India. She led a team of undergraduate and graduate field assistants and volunteers in searching for clues that might indicate if the conditions were suitable for hominins. Ultimately, she brought a selection of her fossils to Williamstown, MA, so that Anne could begin to teach her the process of utilizing ESR to date her objects. 

What is ESR?

ESR is a technique used on non-hominin remains that allows scientists to measure the amount of radiation damage a buried object—in this case, several partial sets of animal teeth—has received, providing insights into its geological and biological history. The Siwalik Hills region is a particularly important site for archaeologists because it is home to rich deposits of fossil remains dating from the Miocene to the Pleistocene; Anubhav’s sites, in particular, contain remains from the Pliocene and Pleistocene. These periods are relevant because they are when she theorizes a dispersal could have happened, making the study of the remains more effective. The region is located in the northern part of India (near the border with Pakistan) and covers an area of about 2,400 square kilometers. The fossils Anubhav and her team collected (~0.63–2.58 Myr) include the remains of Pleistocene mammals, such as bovids, porcupines, deer, and elephants, and have been used by archaeologists to learn more about the region’s past climate and ecology.

The Story Starts Here

On January 9, 2023, Anne and Anubhav visited the Williams College Makerspace and asked if we could create high-quality 3D models that would persist as a permanent scientific record for four sets of Pleistocene mammalian teeth that would soon be destroyed as a consequence of ESR dating. Electron spin resonance is currently the most highly specific form of dating for objects up to 2 Mya, and it is used only on animal remains because the dating process requires crushing the material into powder for analysis with highly sensitive equipment. Hominin remains are widely considered too rare and valuable to allow destructive dating, while animal remains are relatively more common. High-quality 3D objects provide researchers with a means to consult and conduct further research on a digital reconstruction of the original at a future date. In addition, the 3D objects are the basis for creating 3D prints of the object for physical study and handling. 

Furthermore, ESR is a rare and expensive technique that is only available at a limited number of sites throughout Australia, Japan, Brazil, Spain, France, and the United States. Williams College is, in fact, the only facility in all of North America with ESR equipment, and Anne is the only ESR specialist at Williams. 

My Job

This spring, I collaborated on this 3D modeling project with David Keiser-Clark, the Makerspace Program Manager. We divided the job so that each of us was in charge of producing two unique 3D models of the highest quality. We began the project by holding a kickoff meeting with Anubhav and Anne to discuss project needs and to receive four sets of prehistoric teeth. Throughout the project, we held additional meetings to discuss progress and, finally, to present finished 3D digital and printed models. Despite the fact that this was my first photogrammetry assignment, I embraced the challenge head-on, working autonomously and engaging with stakeholders whenever necessary.

To build the 3D models, I used a photographic method known as photogrammetry. This required putting together many orbits of images using software to create a three-dimensional object. I participated in two workshops offered by Beth Fischer, Assistant Curator of Digital Learning and Research at the Williams College Museum of Art, to develop knowledge of this procedure. Her thorough understanding of the intricate workings of our photogrammetry software, Agisoft Metashape, was incredibly helpful. Beth was a great resource and was willing to meet with us numerous times. Moreover, I shared what I learned with David (and the entire Makerspace team) so that we could update the Makerspace’s new documentation on photogrammetry. By sharing my experiences, I helped to guarantee that the documentation addressed a wide range of challenging edge-case scenarios and would serve as a thorough and useful reference for future student workers.

Here is a walkthrough of the photogrammetry process:

Taking the Pictures

Valeria and David took an average of 341 pictures for each of the four sets of teeth (a total of 1,365 photographs).

I collaborated with David to take clear images from every aspect and dimension. We took a hands-on approach, testing different angles and lighting settings to look for the best approach to photograph each tooth. I first relied on natural lighting and a plain background. After a couple of runs, however, David pushed the concept to the next level by adding a photography lightbox, which allowed us to shoot higher-quality photographs with bright lighting and without shadows. These photos served as the foundation for subsequent work with the photogrammetry software.

Meeting with Anubhav

Valeria interviewed Anubhav Preet Kaur before starting the 3D model process.

I wanted to know more about the scope of the project and what function my contribution might serve. To better understand the scientific process, I interviewed Anubhav, whose insight shed light on the significance of her research within the larger scientific field. This interaction helped me understand the purpose of the 3D models I was making, especially given the impending pulverization of the teeth via the ESR process. Furthermore, it emphasized the critical need for an accurate digital 3D model, as well as a physical model, that would endure beyond the destruction of the original objects.

Using Photoshop to Create Masks: What is a Mask?

Valeria encountered several challenges when importing masks. However, Beth supported her in her journey, and they overcame those obstacles together.

Masks play a crucial role in the model-building process in Agisoft Metashape as they provide precise control over the specific portions of an image used for generating the model. This level of control ensures the resulting reconstruction is accurate and detailed by eliminating irrelevant or problematic features. I used Adobe Photoshop to create masks for each set of teeth, and this proved to be one of the most challenging aspects of the entire project. Because the sets of photos had varying angles and lighting conditions, I collaborated with Beth Fischer to troubleshoot and overcome these obstacles. This collaborative effort deepened David’s and my own understanding of the process. This enabled him to document the issues I faced and their corresponding solutions for future students. After approximately one month of persistent trial and error and several meetings with Beth, we successfully identified effective solutions to the encountered problems.
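The effect of a mask can be sketched in a few lines of Python. This toy is a simplification of what the Photoshop masking workflow produces: bright background pixels are masked out so that only the remaining pixels would be searched for key points. The threshold value and the list-of-lists image format are illustrative assumptions, not Metashape’s actual mask format:

```python
def make_mask(image, threshold=200):
    """Return a binary mask for a grayscale image (2D list of 0-255 values):
    1 = keep this pixel (part of the object), 0 = masked-out background.
    Assumes a bright background, as in a photography lightbox."""
    return [[1 if pixel < threshold else 0 for pixel in row] for row in image]

# A tiny 3x3 "photo": a dark tooth region on a bright background
photo = [[250, 250, 250],
         [250,  40,  60],
         [250,  55, 245]]
mask = make_mask(photo)
# mask == [[0, 0, 0], [0, 1, 1], [0, 1, 0]]
```

In practice the masks were drawn by hand in Photoshop, but the principle is the same: each photo gets a companion image that tells the software which pixels to ignore.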

Using Metashape to Create the 3D Model

Using Agisoft Metashape to construct the 3D Model by importing the photographs and generated masks.

When you use Metashape, it starts by scanning each image and looking for specific points that stand out, like a small group of dark pixels in a larger area of light pixels. These distinctive points are called “key points,” and the software only searches for them in the unmasked areas of the image. Once it finds these key points, Metashape matches them across multiple images. If it succeeds in finding matches, these points become “tie points.” If enough tie points are found between two images, the software links those images together. The full collection of tie points is called a “sparse point cloud.” These tie points anchor each image’s spatial orientation to the other images in the dataset—it’s a bit like using trigonometry to connect the images via known points. Since Metashape knows the relative positions of multiple tie points in a given image, it can calculate an image’s precise placement relative to the rest of the object. After that process, I made the model even more accurate by using “gradual selection” to refine the sparse point cloud, and then I “optimized cameras” to remove any uncertain points (yay!). 
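That trigonometry intuition can be made concrete with a toy 2D version of triangulation. This sketch is my own illustration, not Metashape’s actual algorithm: it recovers an unknown point from two known “camera” positions and the bearing angle at which each camera sights the point:

```python
import math

def triangulate_2d(cam1, angle1, cam2, angle2):
    """Intersect two bearing rays cast from known camera positions.
    Angles are in radians, measured from the positive x-axis.
    Returns the (x, y) point both cameras are sighting."""
    d1 = (math.cos(angle1), math.sin(angle1))  # direction of ray 1
    d2 = (math.cos(angle2), math.sin(angle2))  # direction of ray 2
    # Solve cam1 + t1*d1 = cam2 + t2*d2 for t1 (Cramer's rule on the 2x2 system)
    denom = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    rx, ry = cam2[0] - cam1[0], cam2[1] - cam1[1]
    t1 = (rx * (-d2[1]) - ry * (-d2[0])) / denom
    return (cam1[0] + t1 * d1[0], cam1[1] + t1 * d1[1])

# Two cameras two units apart, sighting the same feature at 45 and 135
# degrees, pin that feature down at (1, 1):
x, y = triangulate_2d((0, 0), math.radians(45), (2, 0), math.radians(135))
# (x, y) is approximately (1.0, 1.0)
```

Metashape does the 3D equivalent of this with thousands of tie points at once, which is how it can place every camera relative to the object.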

Using Agisoft Metashape to construct the 3D Model by importing the photographs and generated masks.

Later on, I moved on to building the “dense cloud.” This process uses the camera positions calculated earlier, together with the refined sparse cloud, to generate many new points that represent the contours of the object. The resultant dense point cloud is a representation of the object made up of millions of tiny colored dots, resembling the object itself. I then cleaned the dense cloud, removing any noise or uncertain points to refine it further.

Using Agisoft Metashape to construct the 3D Model by importing the photographs and generated masks.

Now it was time to build the geometry! This is what turns the point cloud into a solid, printable surface. Through this process, Metashape connects the dots by forming triangular polygons called “faces.” The more faces the model has, the more detailed it will be (it also uses more memory!). As a point of comparison, early 3D animations often appeared to be blocky objects with visible facets, and that was because those models had low face counts. High face counts offer greater refinement and realism.
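The face-count idea is the same one behind approximating a circle with a polygon: each added face lets the mesh hug the true surface more closely. A purely illustrative sketch of that diminishing error:

```python
import math

def ngon_perimeter(n, radius=1.0):
    """Perimeter of a regular n-gon inscribed in a circle of the given
    radius: n sides, each of length 2*r*sin(pi/n)."""
    return n * 2 * radius * math.sin(math.pi / n)

true_circumference = 2 * math.pi  # the "real" surface being approximated
for n in (6, 24, 96):
    error = 1 - ngon_perimeter(n) / true_circumference
    print(f"{n:3d} faces -> {error:.2%} short of the true circle")
```

Run it and the error shrinks rapidly as the face count climbs, which is exactly why a high-face-count mesh looks smooth while early low-poly models looked blocky.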

Lastly, I textured the model. Metashape uses dense cloud points to identify the color of each spot on the model. Texturing the model offers further realism because it applies the actual colors of the object (as photographed) to the resultant 3D model. 

And that’s the general process I followed to turn a set of images into a high-quality 3D object using Metashape!

Printing the Model

We used calipers and recorded those measurements for later use with accurately scaling the digital object.

To print the final 3D model of the set of teeth, Beth and David worked on scaling it in Metashape. Earlier in the project, David had measured each set of teeth with calipers and recorded metric measurements. Then, Beth marked the endpoints of two sets of David’s measurements and set the length between them. Based on those known measurements, Metashape was then able to figure out the proportionate size of the rest of the model to within 0.1 mm.
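The scaling step is simple proportionality: one known real-world distance fixes the size of everything else in the model. A minimal sketch, with illustrative numbers rather than the actual tooth measurements:

```python
def scale_factor(model_distance, caliper_mm):
    """Factor that converts arbitrary model units to millimetres, derived
    from a single known caliper measurement between two marked points."""
    return caliper_mm / model_distance

def scale_vertices(vertices, factor):
    """Apply a uniform scale to every (x, y, z) vertex of the model."""
    return [(x * factor, y * factor, z * factor) for x, y, z in vertices]

# Two marker points sit 3.2 model units apart, but the calipers read 41.6 mm:
factor = scale_factor(3.2, 41.6)            # 13.0 mm per model unit
scaled = scale_vertices([(1.0, 0.0, 2.0)], factor)
```

Metashape applies the same logic internally once the endpoints of a measured distance are marked, which is how the print could be held to within 0.1 mm of the original.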

Valeria and David began printing a rough draft of how the models will look once the materials are set. 

Valeria and David completed printing a rough draft to verify that the size is accurate.

Next Steps

The final steps, which are scheduled to take place this summer, will be to:

  • Clean up the file structure of the four digital projects in preparation for permanent archiving in the college library;
  • Send the final digital files to Anubhav Preet Kaur in India; we will include .stl files so that she may 3D print her models locally.

Post Script (Feb 23, 2024)

We have completed and shared all four photogrammetry projects with Anubhav Preet Kaur. Each project includes the following:

  • All original photos
  • Final Metashape digital 3D photogrammetry objects, including texturing
  • A .stl and .3mf file, each of which can be used to 3D print the digital object
  • Each project also includes a README text file that offers an overview of the project

We hope to add these 3D objects to this post later this year as rotatable, zoomable objects that can be viewed from all angles.

Sources

  1. Chauhan, Parth. (2022). Chrono-contextual issues at open-air Pleistocene vertebrate fossil sites of central and peninsular India and implications for Indian paleoanthropology. Geological Society, London, Special Publications. 515. 10.1144/SP515-2021-29. https://www.researchgate.net/publication/362424930_Chrono-contextual_issues_at_open-air_Pleistocene_vertebrate_fossil_sites_of_central_and_peninsular_India_and_implications_for_Indian_paleoanthropology
  2. Estes, R. (2023, June 8). bovid. Encyclopedia Britannica. https://www.britannica.com/animal/bovid
  3. Grun, R., Shackleton, N. J., & Deacon, H. J. (n.d.). Electron-spin-resonance dating of tooth enamel from Klasies River mouth … The University of Chicago Press Journals. https://www.journals.uchicago.edu/doi/abs/10.1086/203866 
  4. Lopez, V., & Kaur, A. P. (2023, February 11). Interview with Anubhav. personal. 
  5. Wikimedia Foundation. (2023, June 1). Geologic time scale. Wikipedia. https://en.wikipedia.org/wiki/Geologic_time_scale#Table_of_geologic_time 
  6. Williams College. (n.d.). Anne Skinner. Williams College Chemistry. https://chemistry.williams.edu/profile/askinner/ 
  7. Agisoft. (2022, November 4). Working with masks : Helpdesk Portal. Helpdesk Portal. Retrieved June 16, 2023, from https://agisoft.freshdesk.com/support/solutions/articles/31000153479-working-with-masks
  8. Hominin | Definition, Characteristics, & Family Tree | Britannica. (2023, June 9). Encyclopedia Britannica. Retrieved June 16, 2023, from https://www.britannica.com/topic/hominin

Sustainable 3D Printing at Williams College (Part 1)

Introduction

The Polyformer: upcycle bottle waste to 3D printer filament

The massive amount of plastic bottles incinerated or dumped in landfills or oceans is a growing global concern. In the United States alone, despite recycling efforts, 22 billion plastic bottles are incorrectly disposed of each year. It is evident that our current recycling strategy has been falling short for the past 60 years, and it gives us false confidence to continue our plastic-dependent lifestyle. In response to this urgent problem, Williams College, through a collaboration between the Makerspace and Zilkha Center for Environmental Initiatives, has embarked on an innovative sustainable 3D-printing project that seeks to upcycle plastic bottles into 3D print filament. 

Recycling Methods: Ineffectual at Best and Deceptive at Worst

The current state of plastic waste recycling presents significant challenges and limitations. Recent statistics highlight the large scale of this issue as well as the urgent need to seek innovative and improved solutions.  The United States, for example, generated approximately 40 million tons of plastic waste in 2021, of which only 5-6% (two million tons) were recycled, far below previous estimates. Moreover, between 2019 and 2020, there was a 5.7% global decrease in plastics recovered for recycling, resulting in a net decrease of 290 million pounds. These statistics indicate a concerning downward trend in plastic recycling efforts. 

The annual global production of approximately 400 million tons of plastic waste adds to the growing environmental crisis. Import bans by countries like China and Turkey have hindered recycling efforts, as the United States previously relied on outsourcing a significant portion of its plastic waste for recycling. The inherent challenges of plastic recycling, such as its degradation in quality with repeated recycling, make it less suitable for circular recycling processes. In the United States, the total bottle recycling rate has declined, with 2.5 million plastic bottles discarded every hour. Similarly, the global accumulation of plastic waste in oceans, estimated to be between 75 and 199 million tons, poses a severe threat to marine life and ecosystems, and the long degradation time of plastic bottles, which can take over 450 years, adds to the concern.

These statistics emphasize the pressing need to address the limitations of Polyethylene terephthalate (PET) plastic recycling. Relying solely on conventional recycling methods is inadequate to tackle the magnitude of the problem. Innovative approaches, such as upcycling, are crucial for effectively reducing plastic waste and minimizing our environmental impact. By finding alternative uses for plastic materials, we can break free from the limitations of circular recycling processes and make a significant change in helping eradicate the plastic waste crisis.

Myths, Pros, and Cons of Recycling and Upcycling

Recycling: Despite its benefits, the reality is that after being collected and aggregated, much of the recycled content is stored in unsafe locations until it overflows and is eventually landfilled or burned. Recent incidents, such as a recycling center fire in Richmond, Indiana, highlight the dangers, inefficiencies, and serious consequences of the current recycling system. 

In addition, each time plastic is recycled, its potential for further recycling decreases. PET is classified as grade 1 plastic due to its high recycling potential. However, once it is recycled, it downgrades to grade 7, which is generally no longer recyclable. For this reason, at the Williams Makerspace, we decided to implement a strategy of upcycling, which repurposes PET plastic instead of recycling it, giving the material a longer useful life. 

Upcycling: Upcycling is the practice of transforming a disposable object into one of greater value. It offers an alternative approach by diverting items from the waste stream and enabling their reuse. While upcycling may not restore plastic to its original grade, it provides a longer second life for the material before it becomes waste once again. Upcycling therefore contrasts with the idea that an object has no value once disposed of, or that it must be destroyed before reentering a new cycle of production and value creation. 

The Polyformer Prototype and Its Value

The Polyformer is a sustainable 3D printing project that aims to convert PET plastic bottles into 3D printer filament. For the purposes of this project, the filament will initially be used to produce 3D-printed plant pots and compost bins for the Zilkha Center, effectively converting waste into items that can be utilized on a day-to-day basis. This process could reduce the purchase of virgin plastic objects (i.e., pots and bins), reducing carbon-related shipping emissions and reducing waste generated by single-use plastics. This project aims to explore the environmental impact of repurposing on-site waste into products needed on campus. Additionally, this project offers a prototype for developing locally-sourced 3D printer filament, which would reduce our dependence on purchasing virgin filament that is typically sourced from other countries, such as China, and bears a carbon footprint. The project’s goals include providing an educational opportunity for the students to engage in environmental activism by repurposing single-use plastic bottles into 3D filament and useful objects for the Williams College community. 

The Polyformer is an open-source project with over 4,000 Discord members. It is a prototype and has pain points: bottles require manual cleaning and individual placement onto the machine, and any impurities can cause the filament to fail (break or clog) in the 3D printer. The Polyformer community is actively addressing these issues, and while solutions do not yet exist, this is an exciting project that offers an opportunity to disrupt the stream of plastic waste.

Project Goals and Alignment with Williams’ Strategic Objectives

The project’s goals align with the Williams College Zero Waste Action Plan, which builds upon the sustainability strategy in the college’s strategic plan, focusing on three of its goals. Firstly, it offers an educational opportunity for students to engage in environmental activism and learn about upcycling as a solution to plastic waste. Secondly, the project promotes sustainability by reducing waste and carbon emissions associated with single-use plastics. Thirdly, it reinforces Williams College’s commitment to local engagement and community impact by providing practical and sustainable solutions to address environmental challenges.

Building the Polyformer

Polyformer: Parts View

The Polyformer is a tool that will allow Makerspace student workers to cut a water bottle into a long, consistent ribbon that feeds into a repurposed 3D printer hot end, converting it into standard 1.75 mm filament. Building a Polyformer requires 3D printing 78 individual parts and then assembling those with a Bill of Materials (BOM) that can be sourced individually or purchased as a kit. The kit includes a circuit board, LCD screen, a volcano heater block and 0.4 mm hot end, a stepper motor, stainless steel tubing, bearings, neodymium magnets, lots of wires, and a box of metal fasteners. 
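Back-of-the-envelope, the ribbon-to-filament conversion is just conservation of volume: the ribbon’s rectangular cross-section is remelted into the filament’s round one. The sketch below uses an assumed bottle wall thickness, not an official Polyformer spec:

```python
import math

# Illustrative dimensions -- the wall thickness is an assumption, not an
# official Polyformer spec:
RIBBON_WIDTH_MM = 12.7       # the ~half-inch ribbon cut from the bottle
WALL_THICKNESS_MM = 0.25     # typical PET bottle wall (assumed)
FILAMENT_DIAMETER_MM = 1.75  # standard 3D printer filament

def filament_per_metre_of_ribbon():
    """Metres of filament per metre of ribbon, assuming the plastic's
    volume is conserved as the ribbon is remelted into a round profile."""
    ribbon_area = RIBBON_WIDTH_MM * WALL_THICKNESS_MM           # mm^2
    filament_area = math.pi * (FILAMENT_DIAMETER_MM / 2) ** 2   # mm^2
    return ribbon_area / filament_area
```

Under these assumptions each metre of ribbon yields a bit more than a metre of filament, which is why a whole bottle, spiral-cut into one long ribbon, goes a surprisingly long way.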

We have printed all 78 parts, and my fellow Makerspace student workers have been instrumental in helping to complete that process. The next stage, which I plan to begin this summer, is assembling and testing the Polyformer to transform the plastic bottles into 3D-printer filament. 

Polyformer as a Disruptor

This project aims to disrupt our plastic-centric world in several ways. By repurposing plastic bottles into valuable filament, it challenges the notion that disposables have no value once discarded. Furthermore, it reduces dependence on external filament sources and contributes to a more self-sufficient and sustainable production cycle.

Polyformer: Next Steps

Polyformer assembly

The project is currently in the prototyping phase, and this summer I hope to begin assembling the Polyformer and, subsequently, testing it under a science lab hood. We will use a hood to vent the area because melting the PET/G ribbon from the bottles into filament releases antimony (a suspected carcinogen) and other volatile organic compounds (VOCs). When our Polyformer works as expected, students will then volunteer to collect approximately 200 plastic bottles (a standard 1 kg roll of filament requires approximately 40 bottles) to manufacture enough filament to produce the four large plant pots and 22 compost bins. The pots and bins will be provided to Zilkha Center gardening interns and the Sustainable Living Community at the College, serving as practical examples of upcycling in action.
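The collection target above checks out with quick arithmetic, using the roughly-40-bottles-per-kilogram rule of thumb mentioned above:

```python
# Sanity check on the bottle collection target.
BOTTLES_PER_KG = 40       # approximate bottles per standard 1 kg spool
bottles_collected = 200

spools = bottles_collected / BOTTLES_PER_KG
print(f"{bottles_collected} bottles ~ {spools:.0f} standard 1 kg spools")
```

So 200 bottles yield on the order of five spools' worth of filament for the pots and bins.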

Conclusion

The sustainable 3D printing project at Williams College represents a powerful initiative to combat plastic waste through upcycling. By repurposing plastic bottles into valuable filament and creating sustainable products, the project aligns with Williams’ commitment to environmental stewardship and community engagement. Through innovative approaches like this, we can work towards a future with reduced plastic waste, increased sustainability, and a more conscious approach to consumption.

References

  1. USA plastic bottle pollution: https://www.container-recycling.org/assets/pdfs/media/2006-5-WMW-DownDrain.pdf
  2. Plastic pollution as a global issue: https://www.sciencedirect.com/science/article/pii/S0304389421018537 and https://education.nationalgeographic.org/resource/one-bottle-time/
  3. The evolution and current situation of plastic pollution: https://www.sciencedirect.com/science/article/abs/pii/S0025326X22001114
  4. What is upcycling?: https://www.researchgate.net/publication/303466628_Upcycling
  5. What is the Polyformer?: https://www.reiten.design/polyformer and https://www.aliexpress.us/item/3256804888534268.html
  6. Recycling data: https://blog.nationalgeographic.org/2018/04/04/7-things-you-didnt-know-about-plastic-and-recycling/
  7. Plastics material-specific data: https://www.epa.gov/facts-and-figures-about-materials-waste-and-recycling/plastics-material-specific-data
  8. Richmond, Indiana recycling plant fire: https://www.nytimes.com/2023/04/12/us/richmond-indiana-recycling-plant-fire.html
  9. Williams College Strategic Plan and Zero Waste Action Plan: https://sustainability.williams.edu/waste/zero-waste-action-plan/ and https://president.williams.edu/strategic-plan-2021/

 

Spinning Tales: My Whimsical Adventure in Arduino Turntable Wonderland

Arduino turntable prototype (close up of gear)

I remember the day I first laid eyes on that clunky, awkward, yet fascinating automated burrito-making machine in the local toy store. It was love at first sight! I knew I had to make it mine, but alas, my piggy bank held only a handful of nickels and a couple of lint balls. Little did I know that my passion for robotics would lead me to a journey full of laughter, tears, and making the lives of hundreds of passionate photogrammetry hobbyists like me easier by creating an affordable DIY Arduino turntable.

Fast forward to 2023, where I found myself in our college Makerspace rotating an 80,000-year-old cave bear tooth in one-degree increments and taking 600 pictures, all with just two hands (which took me 4 hours and gave me 2 days of back pain). I found myself daydreaming about the kind of robot I would create if only I had the skills of Tony Stark. And then, soon afterward, while I was surfing the internet for ways to better optimize photogrammetry pictures for 3D scanning, I stumbled upon a YouTube photogrammetry tutorial and found out that there was a “thing” called a “turntable.” To my sadness, it cost $150. And that was my light-bulb moment. I thought, “Why not give it a try?” As I watched my Makerspace friends clumsily rotate a plastic hangman for 3D scanning, I had an epiphany: what if I built an AFFORDABLE automatic turntable to do the job for us?

Arduino turntable prototype (base, rotator, gear, spindle)

With the enthusiasm of a mad scientist, I proposed the idea to David, our Makerspace Program Manager, and he immediately approved it and sent me a couple of resources to start with (thanks, David, for being so supportive). I dove headfirst into the world of turntables that people had previously made. I found Adrian Glasser, a professional computer scientist and consultant, who had already built a prototype similar to the one I was planning to make. Although Adrian's project was pretty cool, it needed fancy components that were relatively expensive. I also found Brian Brocken, a passionate maker and 3D printer, whose turntable project stood out and greatly inspired the design of my prototype. While these works were a great source of inspiration, I kept returning to the question of how to make the design and features more efficient while keeping the device affordable and easy to build.

The journey was fraught with challenges and unexpected twists, but I was determined to build the most magnificent, borderline-overengineered turntable the world had ever seen (just kidding!). I worked iteratively, and my first draft was a very basic model so that I could feel it with my hands and think through the build process. I 3D printed a PLA (a type of 3D printing filament) base, a rotating platform, and some gears and bearings. After researching different approaches, I ordered my first set of electronic components and kept the total cost below $60 for this first version.

Arduino circuit board and LCD screen

I decided to go with the Arduino Uno, a very easy-to-program and flexible microcontroller that serves as the brains of my device. “Easy to build for everyone” lingered in my mind as I chose the components. I got a stepper motor, which moves in discrete increments (unlike a DC motor's continuous rotation), coupled with a motor driver to enable precise, sequential one-degree rotations with a super-low margin of error. To make the turntable more user-friendly, I added a simple LCD display and a rotary encoder for adjusting the rotation speed. After two weeks of assembly and testing, I had a fully functional circuit.
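For context on how a stepper pulls off one-degree moves: a common stepper motor takes 200 full steps per revolution (1.8° per step), so hitting exactly one degree requires microstepping or gearing. The numbers below are typical assumptions, not the exact parts in this build:

```python
# Microsteps per one-degree move, assuming a common 200 step/rev motor and
# a driver set to 1/16 microstepping (assumed values, not this project's
# exact hardware).
FULL_STEPS_PER_REV = 200      # 1.8 degrees per full step
MICROSTEPPING = 16

steps_per_rev = FULL_STEPS_PER_REV * MICROSTEPPING   # 3200 microsteps/rev
steps_per_degree = steps_per_rev / 360               # not an integer!

print(f"{steps_per_degree:.3f} microsteps per degree")
# Since this is fractional, the code should carry the leftover fraction
# between moves so the error doesn't accumulate over a full revolution.
```

This fractional result is one reason the margin of error matters: rounding each move independently would drift noticeably over 360 shots.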

Now it’s time to code! The hardest part of coding was finding the library file on the internet that corresponded to my particular stepper motor. It took me 4 hours just to find the library and start coding! Phew…

I kept writing code for a week and then moved on to testing my code. Overcoming the challenges of building my robotic turntable was like conquering Mount Everest. I spent hours troubleshooting the Arduino code, sifting through lines of syntax until my eyes crossed. But, much like a robot phoenix, I rose from the ashes, armed with patience, persistence, and an endless supply of coffee. After a few weeks of tinkering and testing, I finally had a circuit and a working code that I marked as a BIG CHECKPOINT for the project.

The spring semester gradually came to an end, and the turntable project is taking a summer vacation. But next semester, the first prototype of the turntable is going to see the light of day.

Next Steps

  1. Using Fusion360 to design an easy-to-print, downloadable 3D model (.stl file) 
  2. Using Infra-Red (IR) sensors to automate the camera shutter click with each one-degree rotation of the turntable, so that our Makerspace friends can leave the automated turntable working (extra hours!) overnight **insert cruel laugh**
  3. Sharing the technical details and building process online to make it accessible to other Makerspace groups and hobbyists around the world. This can be done through posting a follow-up blog with all the technical details. For example, I hope to publish step-by-step instructions, along with the final list of parts (with URLs), my custom Arduino code, link to the software library that corresponds to my stepper motor, and post downloadable .stl files for printing my custom 3D models to complete this project.
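The shutter automation in step 2 can be sketched as a simple loop: rotate one degree, fire the shutter, repeat for a full revolution. This is hypothetical illustration logic in Python, not the actual Arduino firmware:

```python
# Sketch of the planned hands-free capture loop. The two callbacks are
# stand-ins for the stepper move and the IR shutter pulse.
def scan(rotate_one_degree, trigger_shutter, degrees=360):
    shots = 0
    for _ in range(degrees):
        rotate_one_degree()   # advance the platform by one degree
        trigger_shutter()     # pulse the IR LED to fire the camera
        shots += 1
    return shots

# Dry run with no-op hardware stubs:
print(scan(lambda: None, lambda: None))
```

With this structure, the turntable really could run unattended overnight: every frame is taken at a known angle, which is exactly what photogrammetry software wants.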

Affordability

I hope to keep the project affordable and my goal is for all costs to be under $70.

Conclusion

During this journey, I learned the importance of patience, collaboration, and perseverance. Building a robotic turntable from scratch is not a one-person job, and I found myself relying on the support and expertise of my fellow Makerspace friends. Together, we shared our knowledge and skills, which not only allowed me to build a better turntable but also contributed to the overall growth and development of our Makerspace community. I enlisted the help of my fellow Makerspace comrades, who offered their own unique brand of wisdom, ranging from programming tips to advice on how to make the turntable levitate. (Note: do not try to make your turntable levitate. It’s a bad idea.)

The Arduino turntable project wasn’t just about creating a cool gadget – it was about embracing my love for robotics and the creative process. In the end, I learned that a healthy dose of humor, imagination, and the willingness to make things up as you go can lead to some truly spectacular results.

Today, my beloved half-constructed Arduino turntable takes pride of place on the little yellow Makerspace table, a constant reminder of progress, the power of imagination, and the beautiful chaos that comes with it. So, dear reader, I encourage you to explore your own interests, whether that’s robotics or any other field that sparks your curiosity. Be open to surprises, maintain a sense of humor when facing challenges, and always remember that amazing innovations often start with bold ideas.

The Backbone of Art: Sculpting a Spine with 3D Printing and Plaster

Sculpting a Spine with 3D Printing and Plaster

This semester in Beginning Sculpture (ARTS 132), my professor Amy Podmore tasked us with creating a sculpture in response to a prompt titled “Scaffolded Fragments.” For this project, we had to “create a sculpture where a part of a figure, (or a fragment or surrogate), is supported, contained, bracketed or held by a wooden support.” We could use any material of choice, but she strongly encouraged us to use wood as the support. To answer the prompt, I started by thinking about what fragment I wanted to use. My initial impulse was to make something that featured a spine. I knew I wanted the spine to be realistic, so I began brainstorming how I could emulate the curvature of a spine and create recognizable vertebrae. First, I modeled vertebrae out of clay and made some plaster molds of those clay pieces, but I knew there had to be a better way to create these bones. 

We were given freedom with regard to materials for this project, so I tried to think outside the box for how to make realistic vertebrae quickly and easily. In my brainstorming and searching, I found a design for a 3D printed vertebra for sale on Etsy. From there, I went to the Makerspace to discuss the logistics and whether using this design would be possible. The students working there told me it was not only possible, but that I didn’t even have to make the purchase and could instead peruse a library of free designs online. I found one that worked, and the undertaking began! 

Spine Sculpture project on display in the Spencer Studio Art Building

Leah Williams led the Makerspace side of things for this project, and once she printed some vertebrae, I brought them to the sculpture studio. I made molds of the vertebrae using alginate and then poured plaster into the molds to create casts. After the casts hardened, I drilled holes in them and slid them onto a piece of steel I had bent to resemble the twist of a spine. The plaster offered a smooth, matte look that the plastic couldn’t, so I decided to use the plaster casts for the spine. Still wanting to incorporate the original 3D printed plastic vertebrae in my sculpture, I placed them in a bird cage-like metal object resting atop the stool. Including both materials created an interplay between the artificial and the natural (with the plaster representing the natural) that makes the viewer wonder what happens when you bring the artificial into the human body.

Sculpting a Spine with 3D Printing and Plaster

One challenge with this project was the limited timeframe I was working within. When we began printing, I had roughly 1-1.5 weeks to complete the project, and printing took more time than I had anticipated. But Leah was able to print enough pieces for me to make casts while more were still being printed, and we made enough to fill the bird cage partway with the 3D printed pieces. In the end, I was able to bring my vision to life and incorporate both machine-made and handmade objects in my sculpture.