Sustainable 3D Printing at Williams College (Part 2)

Polyformer Updates:

Polyformer 3D printed parts and electronics ready to be assembled.

My name is Camily Hidalgo Goncalves, and I am a sophomore at Williams College majoring in Chemistry with a Neuroscience concentration. As a Makerspace student worker, I have recruited Milton Vento ’26, Tashrique Ahmed ’26 (both Computer Science students at Williams College and fellow Makerspace student workers), and Oscar Caino ’27, a student at Swarthmore College who is a prospective Engineering major, to assist me in assembling the Polyformer parts and electronics. We have completed several milestones and made significant progress on the Polyformer project at Williams College. This innovative project aims to upcycle waste plastic bottles into locally sourced 3D printer filament.

Assembly and Integration

The assembled Polyformer

Milton, Oscar, and I worked together to assemble the 78 individual 3D-printed parts required for the Polyformer. This intricate process demanded precision and teamwork. Following the assembly of the physical components, I assisted Tashrique with integrating the electronics. This included the installation of a circuit board, LCD screen, volcano heater block, stepper motor, and various sensors and wiring. These components are essential for the Polyformer to function effectively, converting plastic bottles into usable 3D printer filament.

Collection and Processing of Plastic Bottles

Plastic bottle collection poster.

In preparation for testing, we collected approximately 75 plastic bottles. These bottles were contributed by the Williams College community, demonstrating a collective effort to reduce plastic waste. Elena Sore ‘27, a prospective Computer Science major and Makerspace student worker, and I worked on the initial step in the processing phase: cleaning the bottles and cutting them into long, consistent ribbons. These plastic ribbons will then be fed into the Polyformer, where they will be melted and extruded into filament.

Testing and Quality Assurance

Next fall semester we will begin rigorous testing to ensure that the Polyformer operates smoothly and produces high-quality filament that meets the required standards for 3D printing. Several tests will be conducted, including:

  1. Durability Testing: Assessing the strength and flexibility of the produced filament.
  2. Consistency Testing: Ensuring the filament has a uniform diameter, which is crucial for reliable 3D printing (see the sketch after this list).
  3. Compatibility Testing: Verifying that the filament performs well with various 3D printers and printing conditions, while accommodating different material thicknesses from various brands of PET bottles.
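
As a concrete illustration of the consistency check, here is a minimal sketch in Python (hypothetical, with made-up readings, not our actual test protocol) that summarizes caliper measurements taken along a length of filament against the common tolerance of about ±0.05 mm for 1.75 mm filament:

# Hypothetical consistency check: summarize caliper readings taken
# along a filament sample. Readings and tolerance are example values.
import statistics

NOMINAL_MM = 1.75     # target filament diameter
TOLERANCE_MM = 0.05   # common tolerance for commercial filament
readings_mm = [1.74, 1.76, 1.73, 1.78, 1.75, 1.77, 1.72, 1.75]

mean = statistics.mean(readings_mm)
stdev = statistics.stdev(readings_mm)
worst = max(abs(r - NOMINAL_MM) for r in readings_mm)
print(f"mean = {mean:.3f} mm, stdev = {stdev:.3f} mm, worst deviation = {worst:.3f} mm")
print("PASS" if worst <= TOLERANCE_MM else "FAIL")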

Project Goals and Benefits

The Polyformer project aligns with Williams College’s sustainability goals and offers numerous benefits:

  • Waste Reduction: By upcycling plastic bottles, we reduce the amount of plastic waste that ends up in landfills or oceans.
  • Sustainability Education: The project serves as a hands-on educational tool, teaching students about the importance of repurposing waste materials and innovative ways to do so.
  • Local Impact: The filament produced will be used to create practical items such as plant pots and compost bins for the Zilkha Center for Environmental Initiatives, supporting local sustainability efforts.

Next Steps

We hope to create a sustainable cycle of converting plastic waste into useful products while minimizing the environmental impact of plastic disposal. This project provides practical solutions to plastic waste and also serves as an educational tool, raising awareness about sustainability and encouraging innovative thinking in environmental conservation.

As we move forward, our next steps will be to refine the process and increase the efficiency of the Polyformer:

  1. Rigorous Testing: Thoroughly test the Polyformer to ensure it produces reliable and high-quality filament that meets 3D printing standards.
  2. Scaling Up: Increase the number of collected bottles and the quantity of filament produced.
  3. Educational Workshops: Host campus workshops to educate the broader community about the Polyformer and the importance of sustainable practices. We might seek to collaborate with the Williamstown Milne Library to host a workshop for local community members.
  4. Research and Development: Continue to improve the design and functionality of the Polyformer based on feedback and test results.

Acknowledgements

Assembling the Polyformer: Oscar Caino ‘27, a Swarthmore College student (left), and Camily Hidalgo Goncalves ‘26, a Williams College student (right).

This project would not have been possible without the ongoing support and collaboration received. We are immensely grateful to our collaborators: David Keiser-Clark (Makerspace Program Manager), Milton Vento ‘26, Tashrique Ahmed ‘26 and Elena Sore ‘27 (Makerspace Student Workers), Yvette Belleau (Lead Custodian, Facilities), Christine Seibert (Sustainability Coordinator, Zilkha Center), Mike Evans (Deputy Director, Zilkha Center for Environmental Initiatives), and Oscar Caino ‘27 (Swarthmore College Student). Their expertise, guidance, and contributions have been invaluable to the progress of the Polyformer project.

Stay tuned for more updates as we continue to develop and test the Polyformer. Together, we can make a significant impact in reducing plastic waste and promoting sustainable practices at Williams College.

Reefs Reimagined: 3D Printing the Effects of Tsunamis on Coral

Lauren Mukavitz ‘27: In the Makerspace taking the supports off my finished models

When most people think about coral reef degradation, they often think about bleaching and the effects of climate change. However, coral faces another danger that is hardly talked about—tsunamis. Coral reefs have a unique structure that increases the friction a tsunami encounters on its way to the shore, slowing down the wave and mitigating damage. However, the intense forces during a tsunami can be extremely damaging and can destroy entire reefs. To better understand this impact, I embarked on a project for my class Geologic Hazards with Mike Hudak, Assistant Professor of Geosciences, to model coral before and after a tsunami.

Replicating Tsunami Damaged Coral

First, I created an undamaged model that represented a small colony of coral polyps before a tsunami event. I used Ultimaker Cura to design a 3D model of the coral. Next, I wanted to simulate the damage caused by a tsunami. After struggling to find existing methods for modeling tsunami forces on coral, I teamed up with David Keiser-Clark, Makerspace Program Manager, Elena Sore, Makerspace Student Worker, and Jason Mativi, Science Shop Instrumentation Engineer, to use SolidWorks, a 3D CAD program. We applied a nonlinear analysis with 0.3 bar (3E4 N/m^2) of pressure, the estimated pressure an average piece of coral experiences during a tsunami, to the undamaged model and let SolidWorks create a “deformed” model for us. It took the software approximately four hours to render these forces onto the 3D model.
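
For readers checking the units: 1 bar is 100,000 N/m^2, so 0.3 bar works out to 3E4 N/m^2. A one-line sanity check in Python:

# Unit sanity check for the simulation load of 0.3 bar.
BAR_TO_PA = 1e5                # 1 bar = 100,000 N/m^2 (pascals)
pressure_pa = 0.3 * BAR_TO_PA  # 30,000 N/m^2, i.e., 3E4
print(f"0.3 bar = {pressure_pa:.0f} N/m^2")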

Left: Original coral 3D model; Right: Same model but deformed using SolidWorks to simulate tsunami forces

The successful PLA print — StoneFil was not a fan of my design

Then I had both models printed at the Makerspace. Initially, we tried using StoneFil PLA, a filament that would approximately mimic coral’s composition with its half PLA (a polyester typically derived from fermented plant starch, such as corn, cassava, sugarcane, or sugar beet pulp) and half ceramic powder. However, the model was too intricate for the material, resulting in a messy and unusable print. We ended up using standard PLA for the final models, which, while less accurate in texture, allowed us to proceed with the physical representation. To simulate sediment damage, I took the “deformed” model to the science shop and used a sandblaster. Unfortunately, the PLA was too strong, and the glass beads in the sandblaster didn’t deform the model as expected. So, we resorted to breaking the model by hand to represent the kind of physical damage coral might endure during a tsunami.

My models are only approximations of the damage coral sustains during tsunamis. The exact forces on coral polyps during these events are unique and complex, making accurate modeling challenging.

Next Steps

The first step to creating a more accurate model would be refining the methods used to determine the necessary forces and coefficients. Then, we could use a 3D CAD program like SolidWorks for a more precise analysis. Additionally, applying post-processing techniques to the 3D-printed models, such as using adhesives and texturing materials, could make the PLA models look and feel more like real coral, enhancing their realism.

Creating more accurate models provides a deeper understanding of the interactions between coral reefs and tsunamis, helping us plan better for these events. This knowledge can guide conservation efforts, inform disaster preparedness strategies, and contribute to the broader field of marine biology. As better models are developed, we move closer to mitigating the devastating impacts of natural disasters on vital ecosystems like coral reefs.

The Lincoln Logs: Printing for the WCMA’s Emancipation Exhibition

Introduction: 

WCMA’s “Emancipation: The Unfinished Project of Liberation” exhibit

My most recent Makerspace academic project was assisting Beth Fischer, Assistant Curator of Digital Learning and Research for the Williams College Museum of Art. My task was to 3D print replicas of two sculptures of President Lincoln—Sarah Fisher Ames’ bust of Lincoln and the iconic Abraham Lincoln life mask by Clark Mills—as part of the WCMA’s “Emancipation: The Unfinished Project of Liberation” exhibit. These two models complement the work of Hugh Hayden, also featured in Emancipation, who incorporates PLA prints into his artistic process. The exhibit emphasizes 3D printing as a relatively accessible medium for creativity and showcases different ways it can assist other styles of art, particularly mold-making.

Setup 

The two photogrammetry-based 3D models were gorgeous. They defined every ridge, bump, and strand of hair on Lincoln’s head while carrying the texture of the clay, but it was this beauty that posed a challenge. The multidimensional texture of clay is hard to depict using the horizontal layers of filament that 3D printers lay down. Although not a full solution, one remedy was using a hybrid filament – part ceramic and part PLA. Although this filament can’t recreate the vertical complexity of a sculpted model’s texture, it provides a smoother, heavier finish that better resembles the original material.

We had some leftover StoneFil filament from a previous project, but we knew we would need more to complete both prints. The question was how much more. We did not know how much filament remained on the spools, and there was no specific size requested – simply that the two models remain proportional and be as large as possible.

Naturally, as a math major, I took this as a challenge to maximize the size we could print with only one additional spool of filament. First, I printed two smaller models, noted their xyz scaling, and measured the distance from the nose to the chin. I then used those measurements to find the scale between the height of one and the length of the other. Then, given that scaling, I noted the estimated combined length of the models at a few different sizes and found the factor at which the necessary filament scales with size – since filament use grows with a model’s volume, it scales with the cube of the linear dimensions. In theory, I could approximate the maximum print size given the length of the filament we had left and the spool arriving soon. There was only one problem – we didn’t know how much filament we had. We could weigh the filament, but any statement on the spool-to-filament proportion would’ve been guesswork.
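
Here is a minimal sketch of that estimate in Python, with made-up numbers rather than our real measurements. The key fact is that filament use grows with a model's volume, so it scales with the cube of the linear scale factor:

# Hypothetical scale-maximization estimate (example numbers only).
# Filament use is proportional to volume, so it scales with the cube
# of the linear scale factor.
test_scale = 0.25        # linear scale of the small test prints
test_filament_g = 30.0   # filament used by both test models together
available_g = 900.0      # filament on hand plus the incoming spool

# Solve available = test_filament * (s / test_scale)**3 for s:
max_scale = test_scale * (available_g / test_filament_g) ** (1 / 3)
print(f"Largest printable linear scale: about {max_scale:.2f}x")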

That was when another Makerspace student worker, Elena Sore, had an idea to create a reference guide for the weight of empty filament spools. We use a variety of brands of filament, and each has a differently sized spool. Now, when we finish a spool, we weigh it and enter it into a spreadsheet, allowing us to measure the amount of filament remaining on any given spool by subtracting the spool’s weight from the overall weight.
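
The bookkeeping itself is a simple subtraction, plus a grams-to-meters conversion if you want length. A sketch with assumed values (plain PLA density of about 1.24 g/cm^3; a ceramic-filled filament like StoneFil is denser, so its number would differ):

# Estimate remaining filament from the spool log (example values).
# Density is an assumption for plain PLA; adjust for other materials.
import math

gross_g = 612.0        # spool as weighed today (made-up value)
tare_g = 238.0         # empty-spool weight from the reference spreadsheet
density_g_cm3 = 1.24   # approximate density of plain PLA
diameter_mm = 1.75     # standard filament diameter

filament_g = gross_g - tare_g
area_cm2 = math.pi * (diameter_mm / 20) ** 2   # radius in cm = mm / 20
length_m = filament_g / (density_g_cm3 * area_cm2 * 100)
print(f"{filament_g:.0f} g of filament is roughly {length_m:.0f} m")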

Printing and Troubleshooting

The final bust with its supports still attached

The time came to print the models. I had decided on heights of 140 mm and 93.15 mm, which would give us just enough filament to print both models, with enough to spare to print one more in case of failure. I sliced and started the print of the bust, and 20 hours later it came out well. There were a few small holes that indicated mild under-extrusion, but they were not too distracting, and the WCMA was interested in showcasing the uniqueness of 3D prints, so I was perfectly content with the model.

The second print was not as fortunate. Externally, it looked fine, except the under-extrusion was more visible than on the first model. Before removing the model from the plate, I started googling remedies for under-extrusion because I was concerned that I didn’t have enough filament to endure another failure. I recalibrated the printer, increased the nozzle temperature, slightly decreased the printing speed, and ran another mini model with ordinary PLA. It came out perfectly – and that worried me, because I was nervous that the problem was with the ceramic filament, which was a requirement for the project. Eventually, I stumbled onto a solution by turning the StoneFil model upside down to examine the supports, and to my shock, I found that they were completely “spaghettified”: the supports had failed and were just a mess of tangled filament. I was impressed that the print had managed to build at all.

The under-extrusion was far more noticeable on the first print of the mask than the bust.

Exhibition: “Feel free to pick up and touch these reduced-scale 3D prints of Abraham Lincoln!”

I spent some time in different slicing programs, trying to optimize the supports. It took me (admittedly longer than it should have) to realize that with supports as dense as this model required, this was a rare case where it would be more filament-efficient and less failure-prone to fill the space underneath the mask with infill instead of supports. This was the solution we went with, and the mask printed perfectly.

While weighing the options for the final print, David Keiser-Clark, Makerspace Program Manager, and I brainstormed ways of filling in the holes caused by under-extrusion. Our favorite idea, and the only experiment we ran, was using a heat gun to melt a tiny bit of StoneFil filament into the hole and then sand down the excess. It was good in theory, and fun to try, but not entirely effective because the result looked like a visible patch. 3D printing filament solidifies incredibly fast as it cools, so we would have needed to either pour a liquid into the hole and/or do a tremendous amount of sanding afterward.

Conclusion

Coincidentally, as the final prints started, I again fell very ill and had to return home for the week, so I did not get to hand off the pieces. However, I did get the chance to go to the Emancipation exhibit and see the final results. The space itself was a moving experience, and I would strongly encourage anybody to visit or read about the exhibition and its incorporation of 3D printing. This was a fun project to complete during Winter Study, and I got the chance to answer a lot of looming questions about 3D printing along the way. I learned a lot about the balance of layer height, print speed, and temperature; I’m excited to see what else we can do with our filament data log; and melting PLA with the heat gun was so much fun that I may try to find a way to make it practical. Although, I must admit, my favorite part of this project is the little Lincoln that found himself a home in my dorm.

An early, miniature prototype that now adorns my desk as a reminder of my work on this WCMA project!

Lost but Found in the Photogrammetry World

The Quandary:

Have you ever broken or lost a small part of an important object you value? Perhaps the strap of that beautiful watch you got from your grandma, or the battery cover for the back of your remote control? You looked for it everywhere, but the part was too “insignificant” to be sold on its own. Or it just wasn’t the sort of thing anyone would expect to need replacing.

The original black “obsolete plastic object” (left) keeping files safely stored, alongside the newly cloned red part (right)

Last semester at Williams College, Chris Koné, Associate Professor of German and Director of the Oakley Center for the Humanities & Social Sciences, had a similar experience. He lost an integral part of his desk that allows him to keep his files neatly stored and organized (shown in the picture). Desperate to have a place for the files and papers scattered miserably on the floor, Prof. Koné looked in a brick-and-mortar NYC office parts store, as well as on Amazon, eBay, and other e-commerce websites, but alas, the object was nowhere to be found. It had become obsolete!

The “obsolete plastic object”

Determined to leave no stone unturned in finding a replacement for the obsolete plastic object, Prof. Koné did what any sensible person with access to the Makerspace would do – he asked for a 3D-printed model of the object! And it is here that he met me, an intern working at the Makerspace over the summer. In the process of helping him, I learned about multiple methods of photogrammetry and created a significantly more efficient and streamlined workflow for the Makerspace. 

Some Background

I was a new student worker with zero knowledge of photogrammetry and 3D printing, so David Keiser-Clark, the Makerspace Program Manager, thought this project would be just the right amount of challenge for me. Photogrammetry is the process of creating a 3-dimensional digital model of an object by taking dozens or hundreds of photos of the object from different angles and processing them with software to create a digital spatial representation of the object. Doing this project would be a good introduction to the 3D digital world while allowing me to get acquainted with the Makerspace.

If you have tried photogrammetry, you know that some of the most difficult objects to work with are those that are dark or shiny. This object was dark and shiny! When an object is dark, it becomes difficult for the software to distinguish one feature on the object from another, resulting in an inaccurate digital representation. Likewise, when an object is shiny, it reflects light, resulting in images that lack detail in the shiny areas. Thus, you can imagine how challenging it is when your object is both shiny and dark!

Step 1

The first step was to figure out how to reduce the darkness and shininess of the object. To kill two birds with one stone, I covered the object with white baby powder, a cheaper alternative to the expensive photogrammetry sprays used in industry. The powder’s white color would help eliminate the object’s darkness and offer it some helpful texture, while its anti-reflective nature would reduce shininess. After several attempts to completely cover the object, this method proved ineffective, as the powder would not stick to the object’s smooth surface. A little out-of-the-box thinking led me to cover the object with matte blue paper tape, which proved very effective, as the tape’s rough texture minimized light reflection.

obsolete plastic object coated with blue tape

A Bit of Photography 

Milton taking pictures for photogrammetry

Now that the two biggest giants had been slain, it was time to move on to the next step: taking pictures of the object. Taking shots for photogrammetry is very similar to doing stop-motion animation. You take a picture of the object, rotate it by a small angle (between 5 and 15 degrees) by hand or with a turntable (a rotating disc), and take another picture. Then you repeat this process until the object has rotated completely, change the camera angle (e.g., by taking shots from the top of the object), and redo the whole process again. This can be quite tedious, especially if you have to do it by hand, but luckily for me, the Makerspace had recently bought a new automated turntable, so I didn’t have to rotate the object manually. I also got to be the first to create a documentation guide so that other Makerspace student workers can more easily utilize the turntable in the future!
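
To give a sense of how many photos this implies, here is a quick back-of-the-envelope sketch in Python (the step angle and number of camera heights are illustrative, not our exact settings):

# Rough shot count for a turntable photogrammetry session.
step_deg = 10        # rotate 5-15 degrees between shots; 10 used here
camera_heights = 3   # e.g., level with the object, angled up, top-down

shots_per_ring = 360 // step_deg
total_shots = shots_per_ring * camera_heights
print(f"{shots_per_ring} shots per ring x {camera_heights} heights = {total_shots} photos")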

Alignment Process

Once the photos were ready, the next step was to analyze them using photogrammetry software. I turned to Agisoft Metashape, a powerful program that receives pictures of an object from different angles and analyzes them to create a 3D depiction of the object. The software first finds common points between the various images, called anchor points, and calculates their relative distances, allowing the software to place them in a 3D space. This process is called alignment.

Unfortunately, despite my efforts to aid the software by covering the object with matte blue tape to reduce its shininess and darkness, the obsolete plastic object did not align properly in Metashape. While I could not pinpoint the exact reason, I suspect it was due to its hollow shape, which made it challenging for the software to capture points on the inner surfaces, especially the corners. It was quite disappointing to get these results, especially after having had to wade through Metashape’s jungle of commands, but that was certainly not the end of it all. I decided to try a different approach – raise an older desktop 3D scanner from the grave!

Misalignment in Metashape

The Hewlett Packard (HP) 3D Structured Light Scanner

The DAVID 3D Scanner (now called the HP 3D Structured Light Scanner) works by projecting patterns of light onto a subject and capturing how those patterns deform across its surface; from that distortion, the software triangulates the distance of each point. These points, represented as XYZ coordinates, are collectively used to digitally reconstruct the object in a 3D space. I intended to use the structured light scanner as an alternative to Metashape software because it allows more control over the alignment process. For example, you can select two specific images you want to align and tell the software how you want them to be aligned. In addition, the scanner features a projector that sheds light on the object you’re scanning, as well as a calibrated background panel, allowing for greater detail to be picked up.

HP 3D Structured Light Scanner

A Bit of Scanner Surgery

Using the HP 3D Structured Light Scanner

The Makerspace’s HP scanner unfortunately hadn’t been functional in over three years. The camera was not working, and the scanner’s software could not make exports due to licensing issues. I updated the device’s software and installed new camera drivers, and in no time, the scanner was fully functional again. I then scanned the obsolete plastic object with the structured scanner. Unfortunately, the results were unsatisfactory. It resolved the prior alignment issue with Metashape, but the digital model had thin walls and holes on some of its surfaces, making it impossible to print. 

Thin walls and holes in the structured light scanner model

Building from the Ground Up with Fusion 360

Results of different lighting settings in the HP 3D Structured Light Scanner

After trying out different strategies with the HP 3D Structured Light Scanner, such as different light settings, and still not getting good results, David suggested a different method – building the model from scratch! Excited to try out new software (and get a break from the structured scanner!), I began exploring Fusion 360 tutorials and documentation. Autodesk Fusion 360 is a Computer-Aided Design (CAD) program with applications across various sectors, including manufacturing, engineering, and electronics. It allows one to create a simple sketch of a model and build it into a solid model with precise dimensions. You can even add simulations of real-world features such as materials and lighting.

Of course, this new, complicated piece of software came with its challenges. For example, I had to know the dimensions of the fillets (the arcs) inside and outside my object. A little creativity, combined with a pair of vernier calipers and a piece of paper, did the job (see the sketch below for one way this can be done). Another challenge was understanding the timeline feature of Fusion 360, one of the most important features of the program, which allows you to record your progress and go back to a certain point. Researching online and getting help from a friend (shoutout to Oscar!) with more experience in Fusion 360 proved helpful in better understanding the software.
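
For the curious, one common workshop trick for recovering a fillet radius is to measure the arc's chord and its sagitta (the height from the chord's midpoint to the arc) with calipers and apply a bit of circle geometry. This is a hypothetical sketch, since the post doesn't spell out the exact method used:

# Hypothetical fillet-radius recovery from caliper measurements.
# For a circular arc: radius = (chord^2 + 4 * sagitta^2) / (8 * sagitta).
chord_mm = 12.0    # straight-line span across the arc (made-up value)
sagitta_mm = 2.0   # height from the chord's midpoint to the arc (made-up)
radius_mm = (chord_mm**2 + 4 * sagitta_mm**2) / (8 * sagitta_mm)
print(f"Fillet radius is about {radius_mm:.1f} mm")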

Successful Fusion 360 model of the obsolete plastic object

Fusion 360 timeline for modeling the obsolete plastic object

The Obsolete Plastic Object Was No Longer Obsolete

Finally, after several days of learning Fusion 360 and incrementally building a model, the obsolete plastic object was no longer obsolete. I produced an accurate model of the object and printed several copies, which Professor Koné was more than happy to receive. His files had regained their home, and time spent scouring eBay and Amazon for a nameless object had come to an end!

The red part (right) is the new clone of the original black “obsolete plastic object” (left). Files are once again safely organized.

Conclusion

My experience working on photogrammetry and 3D modeling at the Makerspace was certainly full of twists and turns, but definitely worth it. I learned how to use several very complicated software applications, significantly improved the Makerspace photogrammetry procedure (reducing a 3-month process to 1-2 days), and approached new challenges with an open mind.

Prof. Koné and me holding the original (covered in blue tape) and a newly printed black 3D “obsolete” plastic object

Next Steps

I look forward to exploring other methods of photogrammetry, particularly ones that require less equipment, such as those that use only a smartphone. RealityScan is one promising alternative that can create lower-resolution scans and models in less than 15 minutes. With new technologies coming out every day, there are many avenues to explore, and I’m excited to discover better methods.

Screenshot: Experimenting with the RealityScan smartphone app

Truly-Local Internet: The PiBrary Project

Figure 1: A Raspberry Pi 4 Model B

If a local organization has important information for its neighbors, is there a way it can broadcast directly to them without bouncing the data to Toronto and back? I grew up here in the Berkshires, and have recently joined the Office of Information Technology (OIT) at Williams. Thinking about Williams College’s commitment to community service and support, my project goal was to demo a low-cost, low-maintenance device which a local organization could use to easily broadcast information and resources over WiFi directly to nearby cell phone users through familiar, standard methods — (1) connecting to a WiFi network, and (2) navigating to a website address in a browser — without needing national or global infrastructure, or specialized equipment or technical skills on either side of the connection. Such an “internet-in-a-box” model could have useful applications in emergency scenarios, but also could provide curated information or resources digitally to multiple people in other specific, time-limited places and moments — for example, at a festival, workshop, teach-in, or other community event.

Figure 2: Wilmington VT, a mere 40-minute drive from Williams.

Let’s give this idea some real context. Imagine a small town in nearby southern Vermont – say, Wilmington. It’s late August, and a storm rips through, dropping 8 inches of rain in 24 hours, washing out roads and bridges, and knocking cell towers and internet infrastructure offline, leaving you without any connectivity for days. Local fire, rescue, and police services, town government, even your electric company, typically use websites, social media, and text messages to communicate critical information — but now, those methods don’t work. Where can you go for information regarding emergency food, shelter, medical care? Is the water safe to drink? When will power be restored? Where are the downed power lines and flooded roads? You’re both literally and figuratively in the dark.

No need to imagine: this actually happened in 2011, with Tropical Storm Irene. Superstorm Sandy in 2012 presented a similar case. And just this April, a single fiber optic cable damaged by a late-season snowstorm shut down Berkshire businesses for a day.

Truly local connections literally do not exist on the modern Internet. Are you on campus and want to view the williams.edu website? That data lives on a server in Toronto, and travels through 6 or 7 intermediary servers (including places like New Jersey and Ohio) before it lands in your cell phone’s browser (also producing 0.5g CO2 each visit). Under normal conditions, this globalized infrastructure is reliable, and has important benefits. But it’s useful to think about the edge cases. Climate change is bringing more unpredictable severe weather events. Rural areas like ours are often underserved by internet service providers (ISPs), which often have little financial incentive to invest in maintaining or expanding infrastructure.

This post offers a guide to creating your own DIY hyper-local webserver. If you can write a webpage (in plain HTML) and are open to my guidance in using a command line: follow along!

Required Equipment and Steps

Figure 3: Required hardware: Raspberry Pi 4 Model B, Power Supply, PiSwitch, and 32GB MicroSD card with Raspberry Pi OS installed.

I decided to build using a Raspberry Pi 4 Model B single-board computer. The Pi is about the size and weight of a deck of cards, and runs a version of Linux, an open-source operating system (OS).

There were two tweaks I determined were necessary to make the Pi ready to play the role I imagined. First: I needed to enable the Pi to act as a webserver, rather than a desktop. Second: I needed to adjust the Pi’s built-in WiFi connection to broadcast, rather than receive, a WiFi signal.

Tweak 1: Webserver Setup

Globally, 30% of all known websites use Apache, an open-source webserver software launched in 1995. I installed Apache on the Pi through the command line, using the command:

sudo apt install apache2

Now, any content I wanted to broadcast to other users, I could simply place into the preexisting folder at /var/www/html/. I wrote a home page and an “about” page, and created 4 subfolders loaded with some open-licensed content (PDFs, audio and video files). You can check out my content (and adapt it if you like!) at github.com/gpetruzella/pibrary.

Tweak 2: Adjusting WiFi to Broadcast

I then used the following on the command line to tell the Pi to broadcast as a WiFi hotspot:

sudo nmcli device wifi hotspot ssid <my-chosen-hotspot-name> ifname wlan0

(The Pi’s built-in WiFi device is named “wlan0”; I chose to name my hotspot “pibrary”.)

Now, the Pi was broadcasting a WiFi hotspot, which other devices would be able to see and connect to. But… I wanted to make sure this happened automatically every time the Pi was switched on. To accomplish that, I needed to find the new hotspot’s UUID, then use that in one final configuration step. I found the hotspot’s UUID by running:

nmcli connection

This displayed a table with multiple rows: I found the “pibrary” row and copied its UUID. Then, I ran:

sudo nmcli connection modify <pibrary’s UUID> connection.autoconnect yes connection.autoconnect-priority 100

With this modification completed, simply switching on the Pi will automatically start broadcasting a WiFi signal (as a “hotspot” or source), with no extra steps.

Connecting from a Nearby Mobile Phone

Figure 4: Viewing the homepage at pibrary.local.

Now, the Pi was both “serving” webpages, and broadcasting a WiFi hotspot. Even without Internet — such as during power outages — any nearby user could find and connect to the WiFi hotspot on their phone… but what “web address” would they type in the browser to reach the content? The final piece of the puzzle requires knowing the Pi’s “hostname”. When I first set up my Pi, I gave it the hostname pibrary (just like the hotspot). The domain name

.local

is a special-use domain name reserved for local network connections; it is resolved by multicast DNS (mDNS) on the local network rather than by a global DNS server. So, once a cell phone has connected to the “pibrary” WiFi hotspot, that user can type

pibrary.local

into the browser to reach the homepage I had set up in Tweak 1. Finding your own Pi’s hostname is as easy as entering the following on the command line:

hostname
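
If you'd like to sanity-check the whole chain from a laptop connected to the hotspot, here is a small sketch using only the Python standard library (the URL assumes the pibrary hostname chosen above):

# Quick smoke test: confirm the Pi is serving pages at pibrary.local.
# Run from a device connected to the "pibrary" hotspot.
import urllib.request

URL = "http://pibrary.local/"  # assumes the hostname set up earlier

try:
    with urllib.request.urlopen(URL, timeout=5) as resp:
        print(f"{URL} responded with HTTP {resp.status}")
except OSError as err:
    print(f"Could not reach {URL}: {err}")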

Experiencing the Local Website

Below are a few screenshot examples of navigating the PiBrary resources from an Android phone.

Figure 5: You can stream an open-licensed video.

Figure 6: You can access directories of open-licensed learning resources.

Figure 7: You can view PDFs.

Challenges and Future Expansions

One limitation of this implementation is the range of consumer-grade WiFi: the maximum signal distance is roughly 90 meters under ideal conditions. The HaLow (802.11ah) standard offers up to 1 km of range, but today’s consumer cell phones aren’t built to use that standard. One solution could use HaLow to send data from one Pi to another – say, one in town hall and another at a fire station (if each has an inexpensive HaLow module installed) – with each one serving its own nearby neighborhood over standard WiFi. Alternatively, even off-the-shelf home mesh or WiFi “extender” hardware could improve the reach of this model without significant cost.

A second challenge: maintaining and editing content. My ideal use-case was for non-technical community members (e.g. in public safety or town government) to easily push information, announcements, etc. However, this demo succeeded because I knew how to edit and manage webpage content directly (i.e. by writing HTML). For a non-technical community member, using the bare Apache webserver this way could be a significant barrier to easy deployment or quick posting, especially in the environment of a public emergency. To address this, I would like to explore whether the YunoHost open-source server management application is compatible with the PiBrary project. YunoHost offers a very familiar and robust web editing interface, plus other possible additional services, such as email hosting.

In terms of sustainability, a lightweight Pi-hosted local site radically reduces the total carbon impact of each site visit, even taking into account the fact that Williams’ website is “hosted green”. A fascinating expansion of this project would be to adopt sustainable web design principles and standards across the college’s digital presence.

Finally, privacy. Unlike ordinary internet browsing, which has many elements protecting and encrypting the flow of data, this demo creates a simple, direct, unencrypted WiFi network. (You may have noticed the “insecure alert” icon next to the address in some of the screenshots above.) In the absence of any technical trust guarantees, this setup is suitable only for very specific cases where the connection between server and client is based on human trust – like in a local community!

Thanks to David Keiser-Clark, Makerspace Program Manager, and to my colleagues on the Academic Technology Services team, for their support in developing this prototype.

Simulating Spaces with AR

Fig.1 This is me standing in front of Chapin Hall, using my tablet to view my AR model (see below) superimposed as a “permanent object” onto the Williams campus.

At age nine, I had a bicycle accident (and yes, for those who know me, I can’t swim, but I can pretty much ride a bike, thank you!). It was not that unusual, as bike accidents go: I was going faster than my mom allowed at the time, and I bumped into a really, really BIG rock. Someone nearby picked me up and, in great pain and crying very much, I said: “I want to go home, give me my tablet.” A very Gen-Z answer from me, and I don’t recommend that readers have such an attachment to their devices. But let’s be honest—would I have been in such a situation at the time if I had been peacefully playing the Sims instead of performing dangerous activities (such as bike riding) in real life? Is there a fine line between real and virtual? Can I immerse myself in a virtual environment where I *feel* like I drive without actually driving *insert cool vehicle*?

Fig. 2: I created this sketch of “maker space” in Procreate on my tablet.

Augmented Reality (AR) is something I have been interested in learning more about as an internet geek. Although I count stars for a living now (I am an astrophysics major), I am still very much intrigued by the world of AR. Whenever there is a cool apparatus in front of me, I take full advantage of it and try to learn as much as I can about it. That’s why one of my favorite on-campus jobs is at the Williams College Makerspace! It is the place where I get to be a part of a plethora of cool projects, teach myself some stuff, and go and share it with the world (i.e., as of now, the College campus and the greater Williamstown community!). Fast forward to my sophomore year of college: Professor Giuseppina Forte, Assistant Professor of Architecture and Environmental Studies, reached out to the Makerspace about creating a virtual world using students’ creativity in her class “ENVI 316: Governing Cities by Design: the Built Environment as a Technology of Space”. The course uses multimedia place-based projects to explore and construct equitable built environments, so tools like Augmented Reality can enhance the students’ perspectives on the spaces they imagine by making them a reality.

This project would not have been possible without the help of the Makerspace Program Manager, David Keiser-Clark. He made sure that there was enough communication between me and Professor Forte so that deadlines were met both for the in-class project and for the Williams College “Big Art Show”. In short, my role was to help students enhance their architectural designs with augmented reality simulations. This process involved quite a few technical and creative challenges, leading to a lot of growth as a Makerspacian, especially since I had no background in AR before taking part in this project!

Choosing Tools and Techniques

My role in this project was to research current augmented reality software, select one, and then teach students in the course how to utilize it. In consultation with Giuseppina and David, we chose Adobe Aero because it’s free, easy to use, and has lots of cool features for augmented reality. Adobe Aero helps us put digital stuff into the real world, which is perfect for the architectural designs in the “ENVI 316: Governing Cities by Design” course. I then set up a project file repository and inserted guides that I created, such as “Interactive Objects and Triggers in Adobe Aero” and “How to Use Adobe Aero”. This documentation is intended to help students and teaching assistants make their own AR simulations during this — and future — semesters. This way, everyone can try out AR tools and learn how to apply them in their projects, making learning both fun and interactive.

AR Simulations: My process

Fig. 3: I have successfully augmented reality so that, viewed through a tablet, my “maker space” 3D model now appears to be positioned in front of Chapin Hall at Williams College.

Once we had all the tools set up with Adobe Aero, it was time to actually start creating the AR simulations. I learned a lot by watching YouTube tutorials and reading online blogs. These resources showed me how to add different elements to our projects, like trees in front of buildings or people walking down the street.

Here’s a breakdown of how the process looked for me:

  1. Starting the Project: I would open Adobe Aero and begin a new project by selecting the environment where the AR will be deployed. This could be an image of a street or a model of a building façade.
  2. Adding 3D Elements: Using the tools within Aero, I dragged and dropped 3D models that I previously created in Procreate into the scene. I adjusted their positions to fit naturally in front of the buildings.
  3. Animating the Scene: To bring the scene to life, I added simple animations, like people walking or leaves rustling in the wind—there was also the option to add animals like birds or cats which was lovely. Aero’s user-friendly interface made these tasks intuitive, and videos online like this one were extremely helpful along the way!
  4. Viewing in Real-Time: One of the coolest parts was viewing the augmented reality live through my tablet. I could walk around and see how the digital additions interacted with the physical world in real-time.
  5. Refining the Details: Often, I’d notice things that needed adjustment—maybe a tree was too large, or the animations were not smooth. Going back and tweaking these details was crucial to ensure everything looked just right. Figs. 1, 2, and 3 show an example of a small project I did when I was just starting out.

Final Presentation: The Big Art Show

Figures 4 and 5 show side-by-side comparisons of real-life vs. AR spaces as presented in the Williams College “Big Art Show” in the fall 2024 semester. The student who used the AR techniques decided to place plants, trees, people, and animals around the main road to make the scene look more lively and realistic.

Fig. 4: Exhibition at the “Williams College Big Art Show” featuring 3D printed houses and buildings alongside a main road.

Fig. 5: Live recording of an AR space in Adobe Aero, enhanced with added people, trees, and birds to create a more memorable scene.

Lessons Learned

Reflecting on this project, I’ve picked up a few key lessons. First, jumping into something new like augmented reality showed me that with a bit of curiosity, even concepts that seem hard at first become fun. It also taught me the importance of just trying things out and learning as I go. This project really opened my eyes to how technology can bring classroom concepts to life—in this case, the makerspace!—making learning more engaging. Going forward, I’m taking these lessons with me.