The RasPi MilliTome: a Manual Sand Slicer for 3D Reconstruction

by DaniloR29 in Teachers > University+



MainImage.png

My first Instructables project was the Magic Sand Slicer, a Rube Goldberg machine–style device that automatically slices a cylinder of kinetic sand to reveal its internal three-dimensional texture. This structure is created by mixing coloured sand blobs, forming patterns similar to those observed in tomographic 3D reconstruction.

In this project, I present a simpler manual version of the device that is easier to build and operate. It includes a support tower for a Raspberry Pi camera module, adapted from another of my previous projects called Rapendula. This project is particularly well-suited for STEM education, as it combines 3D printing, basic mechanics, computer interfacing, image acquisition, and programming. Beyond the technological aspects, it provides an intuitive way to understand how three-dimensional structures can be reconstructed from a sequence of two-dimensional slices.

From a historical perspective, this device reproduces—at a macroscopic and accessible level—a technique widely used by early microscopists, paleontologists, and naturalists to investigate the internal structure of complex biological and geological specimens. Before the advent of modern imaging technologies, serial sectioning was one of the primary methods used to reconstruct three-dimensional structures from two-dimensional slices. In this sense, the present device can be seen as a modern, low-cost reinterpretation of these classical experimental approaches.

This principle is still widely used across many areas of science today. For example, in paleontology, serial sectioning can be used to reconstruct fossils embedded in rock. In microscopy techniques such as confocal or light-sheet imaging, it enables the 3D visualization of transparent specimens. More broadly, medical imaging methods such as X-ray tomography and MRI rely on the same fundamental idea to reconstruct the internal structure of the human body.

Supplies

  1. One rectangular perforated metal plate, approximately 60 × 200 mm
  2. 6 M4 screws, 20 mm long
  3. 2 M4 wing nuts
  4. 4 M4 nuts
  5. 2 M4 screws, 10–20 mm long
  6. 6 small screws for the extruder assembly
  7. 2 small screws to fix the Pi camera to its holder
  8. 1 M7 screw, 75 mm long
  9. 2 M2 screws
  10. One 40 cm aluminium ruler
  11. A 3D printer. I used a Creality Ender Pro with 1.75 mm PLA filament, and PrusaSlicer and Cura for slicing. As printing can take time, I used a 0.28 mm layer height, which does not affect the accuracy of the device.
  12. Superglue
  13. A 32 mm chisel
  14. Two sets of kinetic sand in two contrasting colours
  15. Raspberry Pi Zero 2 W + Pi Camera module (see my project Rapendula for more information)

Device Base

BaseAssembly.png
IMG_1694.JPG
IMG_1730 copy.JPG

The device was assembled on a perforated metallic plate as outlined in the project Rapendula. This dual-plate base provides added weight and stability.

After 3D printing the supporting base for the Pi camera, Raspberry Pi, and LED ring, along with the extruder base (Extrud_Base.stl file), secure them to the platform using M4 screws and nuts (see Figure). Wing nuts are used for the photographic tower to allow quicker disassembly.

The metal ruler is inserted into the slot in the support and secured with a lateral M2 screw.

Pi Camera, Raspberry Pi, and LED Ring Support Platform

LedRing.png
IMG_1747 2.JPG
IMG_1746 2.JPG
IMG_1728.JPG
IMG_1727.JPG
IMG_1726.JPG
IMG_1729.JPG
IMG_1674 copy.JPG

3D print the STL files for the remaining components.

1) Secure the Camera+RaspPiSupport section to the ruler using an M4 10–20 mm screw (see Figure 1).

2) Assemble the LED ring case. The LED ring and its control electronics are repurposed from a cheap smartphone selfie ring light clip (see Figures 2-3). Insert and secure these parts in the case, then enclose the electronics with the lid using two small screws (see Figures 4-6).

3) Ensure the LED ring case slides smoothly on the horizontal shelf of the camera support.

4) Attach the 3D-printed Pi camera holder to the camera using two small screws. Mount the Raspberry Pi and Pi camera on the support. Insert the cylindrical part of the camera holder into the slot on the horizontal shaft (see Figure 7).

5) Complete the camera mounting system by attaching the LCD and RGB LED to the Raspberry Pi case (see the last Figure). The instructions and STL files for this section are included in my project Rapendula.

Extruder Cylinder With Support

IMG_1723.JPG
IMG_1722.JPG
IMG_1720.JPG
IMG_1719 copy 2.JPG
IMG_1719 copy.JPG
Presentation2.jpg
IMG_1721.JPG
IMG_1731.png
Extruder.png

3D print all the required STL files for the components.

  1. Insert the M7 screw into the 3D-printed wheel and secure the head to the wheel using glue (see Figures 1 and 2).
  2. Insert the M7 bolt into its housing. A vice can be used to press-fit it into the 3D-printed part.
  3. Place the bolt housing into the hexagonal opening and glue it in position (see Figure 4).
  4. Insert the cylinder ring into the cylinder block on the side of the bolt housing (see Figure 4).
  5. Mount the cylinder into the support legs (see Figure 5) and secure it with three screws (see Figure 6).
  6. Engage the M7 screw with the bolt into the extruder cylinder (see Figure 6).
  7. Insert the piston into the rectangular cavity of the cylinder as shown in Figures 7 and 8.
  8. Position the cylindrical end of the extruder assembly into the corresponding support base socket.
  9. Secure it with three small screws, as shown in Figure 6.

Adding the Platform

IMG_1724.JPG
IMG_1730.JPG
IMG_1677.JPG
Extruder+Platfrom.png

3D print the necessary STL files for the components.

Place the platform on top of the cylinder and secure the blade guide to it using two M2 screws, as shown in Figure 1. Insert the sand-collecting bin into the platform hole as shown in Figure 2. The final configuration is illustrated in the last figure (loading the sand is explained later).

Python Program to Control the Photo Collection Using the Raspberry Pi

This Python program controls the image acquisition process and provides a live preview of the slicing experiment using a Raspberry Pi setup. It is designed to work with a Pi Camera module and a small display, allowing the user to monitor and capture images of each sand slice in real time.

The main purpose of the program is to:

  1. display a live camera feed,
  2. allow the user to trigger image capture,
  3. store the captured images in sequence,
  4. provide visual feedback during the slicing process.

How it works

The script initializes the camera and display, then enters a continuous loop where it:

  1. Captures live images from the camera. The Pi Camera continuously streams frames, which are shown on the screen. This helps align the sand cylinder and ensure consistent framing.
  2. Displays a real-time preview. The live feed is resized to fit the display and updated continuously, giving immediate visual feedback during operation.
  3. Waits for user input. A button (or key input, depending on your setup) is used to trigger image acquisition. This allows you to manually control when each slice is recorded.
  4. Captures and saves images. When triggered, the program saves a high-resolution image to disk. The images are stored sequentially, forming a stack that represents the internal structure of the sand.
  5. Provides visual feedback. After each capture, the program briefly indicates that the image has been saved (for example, by changing the LED colour or printing a message).

Output

The result of this process is a series of sequentially numbered images, matching the naming used by the script below:

photo_001.jpg
photo_002.jpg
photo_003.jpg
...

These images correspond to consecutive physical slices of the sand cylinder and can later be used for:

  1. image segmentation
  2. 3D reconstruction
  3. volumetric visualization
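As a quick illustration of how this stack is consumed downstream, the saved images can be loaded in filename order into a single NumPy volume. This is a minimal sketch using Pillow and NumPy; the folder name and filename pattern are the ones used by the acquisition script:

```python
import glob

import numpy as np
from PIL import Image

def load_volume(folder="photos", pattern="photo_*.jpg"):
    """Load the captured slices, in filename order, into one (depth, H, W, 3) array."""
    files = sorted(glob.glob(f"{folder}/{pattern}"))
    if not files:
        raise FileNotFoundError(f"no images matching {pattern} in {folder}")
    slices = [np.asarray(Image.open(f).convert("RGB")) for f in files]
    return np.stack(slices, axis=0)
```

Because the numbering is zero-padded (photo_001.jpg, photo_002.jpg, ...), a plain lexicographic sort preserves the physical slice order.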


#!/usr/bin/env python3

import os
import re
import json
import time
import threading
import numpy as np
import cv2
import RPi.GPIO as GPIO
from tkinter import *
from picamera import PiCamera
from picamera.array import PiRGBArray
from PIL import Image, ImageTk, ImageDraw
import st7735

# ================== CONFIG ==================

BUTTON_PIN = 27
LED_R_PIN = 5
LED_G_PIN = 6
LED_B_PIN = 12

PHOTO_FOLDER = "/home/danilo/photos"
SETTINGS_FILE = "/home/danilo/camera_settings.json"

PREVIEW_RES = (240, 240)
FULL_RES = (2592, 1944)

os.makedirs(PHOTO_FOLDER, exist_ok=True)

# ================== AUTO FILE NUMBERING ==================

def get_next_index():
    files = os.listdir(PHOTO_FOLDER)
    numbers = []
    for f in files:
        m = re.match(r"photo_(\d+)\.jpg", f)
        if m:
            numbers.append(int(m.group(1)))
    return max(numbers) + 1 if numbers else 1

photo_index = get_next_index()

# === LED COLOR MAP ===
COLOR_MAP = {
    'red': (1, 0, 0),
    'green': (0, 1, 0),
    'orange': (1, 1, 0),
    'off': (0, 0, 0)
}


def set_led_color(name):
    r, g, b = COLOR_MAP.get(name, (0, 0, 0))
    GPIO.output(LED_R_PIN, r)
    GPIO.output(LED_G_PIN, g)
    GPIO.output(LED_B_PIN, b)


# ================== GPIO ==================

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.setup(LED_R_PIN, GPIO.OUT)
GPIO.setup(LED_G_PIN, GPIO.OUT)
GPIO.setup(LED_B_PIN, GPIO.OUT)

# ================== CAMERA ==================

camera = PiCamera()
camera.resolution = PREVIEW_RES
camera.framerate = 24
camera.rotation = 270
time.sleep(2)

raw_capture = PiRGBArray(camera, size=PREVIEW_RES)

# Proper AWB lock
camera.exposure_mode = "auto"
camera.awb_mode = "auto"
time.sleep(3)
camera.exposure_mode = "off"
gains = camera.awb_gains
camera.awb_mode = "off"
camera.awb_gains = gains

# ================== TFT ==================
disp = st7735.ST7735(
    port=0, cs=0, dc=24, rst=25,
    backlight=19, spi_speed_hz=16000000,
    width=240, height=240,
    rotation=270,
    offset_left=80,
    offset_top=0
)

disp.begin()

# ================== SETTINGS ==================

settings = {
    "iso": 200,
    "shutter_speed": 0,
    "brightness": 50,
    "contrast": 0,
    "target_size": 120
}

if os.path.exists(SETTINGS_FILE):
    with open(SETTINGS_FILE, "r") as f:
        settings.update(json.load(f))

def save_settings():
    with open(SETTINGS_FILE, "w") as f:
        json.dump(settings, f, indent=4)

def apply_settings():
    camera.iso = settings["iso"]
    camera.brightness = settings["brightness"]
    camera.contrast = settings["contrast"]

    if settings["shutter_speed"] > 0:
        camera.shutter_speed = settings["shutter_speed"]
        camera.exposure_mode = "off"
    else:
        camera.exposure_mode = "auto"

apply_settings()

# ================== SHARED DATA ==================

latest_image = None
latest_gray = None
sharpness_value = 0
histogram = None
clipping_warning = ""
lock = threading.Lock()

# ================== CAPTURE ==================

def capture_photo():
    global photo_index

    filename = os.path.join(
        PHOTO_FOLDER,
        f"photo_{photo_index:03d}.jpg"
    )
    photo_index += 1

    camera.resolution = FULL_RES
    time.sleep(0.3)
    camera.capture(filename, quality=95)
    camera.resolution = PREVIEW_RES
    time.sleep(0.3)

    print("Saved:", filename)

# ================== CAMERA LOOP ==================

def camera_loop():
    global latest_image, latest_gray, sharpness_value
    global histogram, clipping_warning

    for frame in camera.capture_continuous(
            raw_capture,
            format="bgr",
            use_video_port=True):

        img = frame.array
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # Sharpness metric
        sharpness_value = cv2.Laplacian(gray, cv2.CV_64F).var()

        # Histogram
        histogram = cv2.calcHist([gray], [0], None, [256], [0, 256])

        # Clipping detection
        if histogram[0] > 1000:
            clipping_warning = "Shadow Clipping"
        elif histogram[255] > 1000:
            clipping_warning = "Highlight Clipping"
        else:
            clipping_warning = ""

        # Convert for display
        img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        pil_img = Image.fromarray(img_rgb)

        draw = ImageDraw.Draw(pil_img)
        w, h = pil_img.size
        cx, cy = w // 2, h // 2

        # Crosshair
        draw.line((cx, 0, cx, h), fill=(0, 255, 0))
        draw.line((0, cy, w, cy), fill=(0, 255, 0))

        # Target square
        size = settings["target_size"]
        half = size // 2
        draw.rectangle(
            (cx - half, cy - half, cx + half, cy + half),
            outline=(255, 0, 0),
            width=2
        )

        disp.display(pil_img)

        with lock:
            latest_image = pil_img.copy()

        raw_capture.truncate(0)

        if GPIO.input(BUTTON_PIN):
            set_led_color('orange')
            capture_photo()
            time.sleep(1.0)
            set_led_color('green')
            while GPIO.input(BUTTON_PIN):
                time.sleep(0.05)

# ================== GUI ==================

root = Tk()
root.title("Milliscope Instrument Mode")

video_label = Label(root)
video_label.pack()

info_label = Label(root, font=("Arial", 12))
info_label.pack()

hist_canvas = Canvas(root, width=400, height=100, bg="black")
hist_canvas.pack()

# ================== CONTROL SLIDERS ==================

control_frame = Frame(root)
control_frame.pack(pady=10)

def make_slider(label, from_, to_, command, initial):
    frame = Frame(control_frame)
    frame.pack()
    Label(frame, text=label, width=22).pack(side=LEFT)
    scale = Scale(frame, from_=from_, to=to_,
                  orient=HORIZONTAL, command=command, length=250)
    scale.set(initial)
    scale.pack(side=RIGHT)

def update_iso(val):
    settings["iso"] = int(val)
    apply_settings()

def update_shutter(val):
    settings["shutter_speed"] = int(val)
    apply_settings()

def update_brightness(val):
    settings["brightness"] = int(val)
    apply_settings()

def update_contrast(val):
    settings["contrast"] = int(val)
    apply_settings()

def update_target(val):
    settings["target_size"] = int(val)

make_slider("ISO", 100, 800, update_iso, settings["iso"])
make_slider("Shutter (µs, 0=auto)", 0, 300000, update_shutter, settings["shutter_speed"])
make_slider("Brightness", 0, 100, update_brightness, settings["brightness"])
make_slider("Contrast", -100, 100, update_contrast, settings["contrast"])
make_slider("Target Square Size", 20, 220, update_target, settings["target_size"])

Button(root, text="Save Settings", command=save_settings).pack(pady=10)

def update_gui():
    global latest_image

    with lock:
        img = latest_image

    if img is not None:
        imgtk = ImageTk.PhotoImage(img.resize((400, 400)))
        video_label.imgtk = imgtk
        video_label.configure(image=imgtk)

    # Update info
    info_label.config(
        text=f"Sharpness: {sharpness_value:.1f} {clipping_warning}"
    )

    # Draw histogram
    hist_canvas.delete("all")
    if histogram is not None:
        hist_norm = (histogram / histogram.max()).flatten()

        for i in range(256):
            x = float(i * (400 / 256))
            y = float(100 - (hist_norm[i] * 100))
            hist_canvas.create_line(x, 100, x, y, fill="white")

    root.after(100, update_gui)

# ================== START ==================

set_led_color('red')
threading.Thread(target=camera_loop, daemon=True).start()
update_gui()

root.mainloop()

GPIO.cleanup()
camera.close()

Preparing the Kinetic Sand

Presentation3.jpg
IMG_1737.JPG
IMG_1697.JPG
IMG_1696.JPG

3D print the sand mould parts. These moulds create a rectangular block of sand that fits the cylinder opening. The procedure below explains how to embed a simple coloured spherical shape inside this uniform block.

A) Fill the mould with a uniform colour of kinetic sand, and use the sand pusher to compact it.

B) Use a craft knife to split the sand block into two parts, then open the mould to reveal the two sand sections. Press the solid object into one side to create its imprint. In this case, a marble was used.

C) Align and push the other half of the sand block against it until the two faces meet again. This deforms the block, so use the sand pusher to compact the sand around the object.

D) With a slight rotating movement of the blade, separate the two parts again. Removing the object reveals the hollow space left by its imprint.

E) Carefully refill the hollow space in the two mould halves with contrasting coloured kinetic sand. A wax carving tool is useful for this.

Once the two halves are filled and compacted, reassemble the sand block using the mould. Align the mould with the square opening on top of the extruder and slide the sand block into the cylinder using the sand pusher. To do this, remove either the camera from above the extruder or the extruder block from the base (refer to the last three figures). Finally, use the sand pusher to gently press the sand down and compact it into the cylinder.

System Setup, Calibration, and Image Acquisition

IMG_1718 copy.JPG
IMG_1705.JPG
IMG_1718.JPG
IMG_1690.JPG
IMG_1691.JPG
IMG_1692.JPG

Start the Raspberry Pi and upload the Python program to it, then run the script. The photos folder is created automatically if it does not already exist.

Figure 1 shows the program interface as it appears on your computer screen. Adjust the camera settings to improve image quality. Capture a test image of the first slice and check that the high-resolution image is correctly saved in the ./photos folder.

Once you are satisfied with the settings, use the Save button (not visible in the figures) to store the parameters. These will be automatically reloaded the next time the program starts.

The LCD display should show the same view as the currently exposed section of the sand (Figure 2). Note that the image in Figure 2 differs from Figure 1 because it was taken at a different stage of the experiment.

You can now begin data collection. Pressing the capture button will trigger the acquisition of an image. A green LED will briefly illuminate to indicate that the image has been saved. At the same time, the terminal window will display the file name and image number (see Figure 3).

After each image is captured, rotate the screw wheel anticlockwise to push the sand upwards. The screw used in this project has a pitch of approximately 1.2 mm, meaning that one full rotation produces a slice of this thickness. A half rotation can be used to obtain thinner slices, effectively increasing the resolution of the 3D reconstruction along the slicing axis.
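The relationship between wheel rotations and axial sampling step is easy to work out. A quick sketch using the ~1.2 mm pitch quoted above; the 60 mm sample length is only an illustrative figure, not a measurement from this build:

```python
SCREW_PITCH_UM = 1200  # approximate pitch of the M7 screw, in micrometres (~1.2 mm)

def slice_thickness_um(rotations):
    """Axial advance of the sand column for a given number of wheel rotations."""
    return int(rotations * SCREW_PITCH_UM)

print(slice_thickness_um(1))    # full rotation -> 1200 (1.2 mm per slice)
print(slice_thickness_um(0.5))  # half rotation -> 600 (0.6 mm per slice)
print(60000 // SCREW_PITCH_UM)  # a 60 mm column yields 50 slices at full rotations
```

Halving the rotation per slice doubles the number of images, and therefore the resolution along the slicing axis.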

A 32 mm chisel is used to slice the sand. Pass it firmly across the surface of the platform to produce a clean, uniform slice. The removed material can be collected in the sand container on top of the platform.

3D Image Reconstruction

Screenshot 2026-05-02 at 12.25.47.png
Screenshot 2026-04-26 at 20.37.51.png
Screenshot 2026-04-26 at 20.37.07.png
Screenshot 2026-04-24 at 13.23.35.png

The 3D reconstruction of the green sand sphere was performed using ImageJ, following the workflow described in my previous Magic Sand Slicer project and demonstrated at the end of the accompanying video.

The figure shows the STL model obtained from the segmented volume. The resolution of the reconstruction is relatively low, as the slices were acquired using a full rotation of the screw mechanism. Higher resolution can be achieved by reducing the step size (e.g., using half rotations). Additionally, one side of the reconstructed object appears truncated due to the loss of the final slices during the acquisition process.

Nevertheless, the results clearly demonstrate the capability of this device as a didactic tool for teaching the fundamental principles of 3D image reconstruction from serial sections.

In parallel, a Python-based pipeline for segmentation and volumetric reconstruction is under development (see Figures 2 and 3). This provides a programmable framework that complements established tools such as ImageJ and enables a more detailed exploration of the reconstruction procedure.
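As a first step of such a pipeline, the coloured sand can be segmented by simple colour thresholding. This is a minimal sketch, assuming the image stack has been loaded as an RGB NumPy volume; the threshold values are illustrative defaults, not values taken from the actual experiment:

```python
import numpy as np

def segment_green(volume, g_min=100, margin=30):
    """Return a boolean mask of voxels whose green channel dominates.

    volume: (depth, H, W, 3) uint8 RGB stack of slice images.
    A voxel counts as 'green sand' if its green value is at least g_min
    and exceeds both red and blue by at least `margin`.
    """
    r = volume[..., 0].astype(np.int16)
    g = volume[..., 1].astype(np.int16)
    b = volume[..., 2].astype(np.int16)
    return (g >= g_min) & (g - r >= margin) & (g - b >= margin)
```

The resulting binary volume can then be meshed (for example with skimage.measure.marching_cubes) and exported as an STL model, mirroring the ImageJ workflow described above.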


I hope you enjoyed this project and are inspired to reproduce it and explore the principles of 3D reconstruction through your own experiments.

Downloads