DecayDock – AI Smart Fridge Companion

by ptallthings93

Introduction

I created DecayDock, an AI-powered smart fridge companion built using an ESP32-CAM and TFT display that helps people reduce household food waste through food recognition and freshness tracking.

The idea came from a very common real-life problem I noticed at home. Vegetables and leftovers were often forgotten behind other containers inside the refrigerator until they spoiled. Even though the food was bought with good intentions, busy schedules and lack of visibility caused unnecessary waste. I realized this is something many families, students, and working professionals experience daily.

While researching, I found that food waste is not only a household problem but also a global environmental issue. According to the UNEP Food Waste Index Report 2024, the world wasted around 1.05 billion tonnes of food in 2022, and households were responsible for nearly 60% of that waste. At the same time, around 783 million people globally faced hunger.

UNEP Food Waste Index Report 2024

Food waste also contributes heavily to climate change. Research from the Food and Agriculture Organization (FAO) states that food loss and waste generate approximately 8–10% of global greenhouse gas emissions, mainly because decomposing food in landfills releases methane gas.

FAO Food Waste and Climate Impact

This inspired me to build a simple and affordable system that could help people manage food more intelligently in everyday life. DecayDock uses Edge AI to recognize food items like vegetables, fruits, milk, and leftovers, then visually tracks freshness using color-based progress bars on a TFT display.

The goal of the project is not just technical innovation, but helping people build better habits, reduce waste, save money, and become more aware of food consumption in a practical and user-friendly way.


What This Project Does

DecayDock is an AI-powered smart fridge companion that helps users reduce household food waste by recognizing food items and tracking their freshness in real time.

Using an ESP32-CAM module and Edge AI, the system can identify common food items such as vegetables, fruits, milk, and leftovers when they are shown in front of the camera. After detection, the food item is automatically added to a simple inventory system displayed on the TFT screen.

The project then estimates the freshness of each item based on:

  1. food category
  2. storage duration
  3. expected shelf life

Freshness is displayed visually using color-based progress bars:

  1. Green → Fresh
  2. Yellow → Consume Soon
  3. Red → Expiring

This makes it easy for users to quickly understand which food should be consumed first before it gets wasted.
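
To make the estimate concrete, here is a minimal sketch of how those three inputs can be combined in an Arduino sketch; the struct, field names, and linear-decay formula are illustrative assumptions rather than the exact project firmware:

// Illustrative inventory entry; shelf lives are per-category estimates.
struct FoodItem {
  const char*   name;           // detected label, e.g. "Tomato"
  unsigned long addedAtMs;      // millis() timestamp when the item was added
  float         shelfLifeDays;  // expected shelf life for this food category
};

// Freshness falls linearly from 100% (just added) to 0% (end of shelf life).
int freshnessPercent(const FoodItem &item) {
  float elapsedDays = (millis() - item.addedAtMs) / 86400000.0; // ms per day
  float f = 100.0 * (1.0 - elapsedDays / item.shelfLifeDays);
  return constrain((int)f, 0, 100);
}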

The project was created to solve a common real-life problem where food is forgotten inside refrigerators because of busy lifestyles and poor visibility. By making food tracking simple and visual, DecayDock encourages better food management habits, reduces unnecessary grocery waste, and promotes more sustainable living using affordable Edge AI technology.


This project was heavily inspired by CircuitDigest's ESP32-CAM object detection tutorial: https://circuitdigest.com/microcontroller-projects/object-recognition-using-esp32-cam-and-edge-impulse (credit: CircuitDigest).

Supplies


To keep the project compact, practical, and easy to install on a refrigerator, all the electronics are housed inside a custom magnetic enclosure. The enclosure contains the ESP32-CAM module, TFT display, power section, and supporting components in a clean and organized layout.

The magnetic mounting allows the device to attach directly to the fridge door without any permanent installation, making it feel more like a real consumer product instead of a temporary prototype.

Electronics & Components

Component | Quantity | Purpose
ESP32-CAM Module | 1 | Edge AI food recognition and processing
2.4" TFT Display | 1 | Inventory and freshness UI
FTDI Programmer | 1 | Uploading code to the ESP32-CAM
Magnetic Enclosure | 1 | Compact fridge-mounted housing
Neodymium Magnets | 2–4 | Attaching the enclosure to the refrigerator
Breadboard / PCB | 1 | Circuit connections
Jumper Wires | A few | Wiring connections
5V USB Power Module | 1 | Device power supply
Push Buttons (optional) | 2 | Menu and reset controls
RGB LED (optional) | 1 | Visual freshness indicator

Software & Platforms

Software | Purpose
Arduino IDE | Programming and firmware upload
Edge Impulse | TinyML food recognition model
TFT_eSPI Library | TFT graphics and UI
ESP32 Camera Library | Camera interface and image capture

Tools Used

Tool | Purpose
Soldering Iron | Permanent electrical connections
Hot Glue Gun | Component mounting
Screwdriver Set | Enclosure assembly
Wire Cutter & Stripper | Cable preparation
Laptop / PC | Programming and AI training
3D Printer (optional) | Custom enclosure design



Details

The enclosure was designed to be minimal, lightweight, and refrigerator-friendly. The front section contains:

  1. ESP32-CAM lens opening
  2. TFT touchscreen display
  3. status indicator area

while the internal section houses:

  1. ESP32-CAM module
  2. wiring
  3. power connections
  4. optional LEDs and buttons

Magnets are mounted behind the enclosure so the device can easily stick to the refrigerator surface without damaging it.

This design keeps the project:

  1. compact
  2. portable
  3. clean-looking
  4. practical for everyday use

and gives the prototype a more polished product-like appearance.

Understanding the System



Before building the hardware, I first planned how the complete system would work in a simple and practical way.

The main idea behind DecayDock was to create a compact smart fridge companion that could recognize food items and help reduce household food waste using Edge AI.

The system is built around the ESP32-CAM module because it combines:

  1. a microcontroller
  2. camera
  3. WiFi capability

in one small and affordable board.

Instead of using expensive cloud-based AI systems, the project uses Edge Impulse TinyML to run the food recognition model directly on the ESP32-CAM. This makes the system faster, lightweight, and suitable for embedded applications.

The complete workflow of the system is simple:

Working Flow

  1. The user places a food item in front of the camera.
  2. The ESP32-CAM captures the image.
  3. The TinyML model identifies the food item.
  4. The detected item is displayed on the TFT screen.
  5. A freshness percentage and color status are generated.
  6. The system visually reminds users which food should be consumed first.
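
In firmware terms, this flow maps onto a short main loop. The sketch below is only a simplified outline of that loop; every function and the confidence threshold are illustrative stand-ins for the real camera, Edge Impulse, and TFT code shown later in the Code step.

// Simplified outline of the DecayDock loop; all functions are stubs that
// stand in for the real camera / Edge Impulse / TFT code.
struct Prediction { const char *label; float confidence; };

Prediction runFoodInference() {             // capture a frame and classify it
  return { "Tomato", 0.87f };               // stubbed result for illustration
}
void addToInventory(const char *label) {}   // would store the item with a timestamp
void drawFoodScreen(const char *label) {}   // would redraw the TFT freshness UI

void setup() {}

void loop() {
  Prediction p = runFoodInference();        // steps 1-3: capture + TinyML inference
  if (p.confidence > 0.6f) {                // confidence gate (illustrative value)
    addToInventory(p.label);                // step 4: inventory update
    drawFoodScreen(p.label);                // steps 5-6: freshness display and reminder
  }
  delay(200);
}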

The project was designed to solve a real-life problem where vegetables and leftovers are often forgotten inside refrigerators until they spoil. By making food visibility easier and more interactive, the system encourages better food management habits and sustainable living.

At this stage, I also finalized the hardware structure:

  1. ESP32-CAM for AI and image processing
  2. TFT display for inventory and freshness UI
  3. Magnetic enclosure for fridge mounting

Wiring It Up



After understanding the system architecture, the next step was connecting the ESP32-CAM with the TFT display and preparing the hardware for AI food recognition.

The goal was to keep the circuit simple, compact, and reliable so it could easily fit inside the magnetic fridge enclosure.

The ESP32-CAM acts as the main controller for:

  1. camera image capture
  2. Edge AI processing
  3. display communication
  4. inventory logic

The TFT display is connected using SPI communication and is used to show:

  1. detected food name
  2. freshness percentage
  3. color freshness bars
  4. inventory information

Components Connected

  1. ESP32-CAM
  2. TFT Display
  3. FTDI Programmer
  4. Jumper wires
  5. 5V power supply

TFT Display Connections

TFT Display Pin | ESP32-CAM Pin
VCC | 3.3V
GND | GND
SCK | GPIO14
MOSI | GPIO15
CS | GPIO13
DC | GPIO2
RST | GPIO12

FTDI Programmer Connections

The FTDI programmer is used to upload code to the ESP32-CAM.

FTDI Pin | ESP32-CAM Pin
5V | 5V
GND | GND
TX | U0R
RX | U0T

For uploading code:

  1. GPIO0 must be connected to GND
  2. Press the reset button after starting upload

Hardware Testing

After completing the wiring, I tested:

  1. camera initialization
  2. TFT display communication
  3. power stability
  4. serial communication

This step was important because stable wiring is necessary before deploying the Edge Impulse TinyML model.

Once the display and camera worked correctly, the hardware was ready for AI model integration and real-time food recognition.

Configuring TFT_eSPI



After wiring the TFT display, the next step was configuring the TFT_eSPI graphics library inside the Arduino IDE.

The TFT_eSPI library is used to:

  1. display food names
  2. draw freshness progress bars
  3. create the inventory interface
  4. render the smart fridge UI

This library is lightweight and optimized for ESP32-based displays, making it ideal for real-time embedded graphics.

Installing TFT_eSPI Library

Step 1

Open Arduino IDE.

Step 2

Go to:

Sketch → Include Library → Manage Libraries

Step 3

Search for:

TFT_eSPI

Install the library by Bodmer.

Editing User Setup File

To make the TFT display work correctly with the ESP32-CAM, the pin configuration inside TFT_eSPI must be edited.

Open:

Documents/Arduino/libraries/TFT_eSPI/User_Setup.h

Configure Display Driver

Uncomment the display driver according to your TFT module.

Example for ILI9341:

#define ILI9341_DRIVER

Configure SPI Pins

Set the SPI pins according to the project wiring:

#define TFT_MOSI 15
#define TFT_SCLK 14
#define TFT_CS 13
#define TFT_DC 2
#define TFT_RST 12

Set Display Resolution

Example:

#define TFT_WIDTH 240
#define TFT_HEIGHT 320

Testing the Display

After configuration, upload a simple graphics test sketch to check:

  1. text rendering
  2. colors
  3. screen refresh
  4. SPI communication

Example test code:

#include <TFT_eSPI.h>

TFT_eSPI tft = TFT_eSPI();

void setup() {
  tft.init();
  tft.setRotation(1);
  tft.fillScreen(TFT_BLACK);

  tft.setTextColor(TFT_GREEN);
  tft.setTextSize(2);
  tft.setCursor(20, 40);
  tft.println("DecayDock Ready");
}

void loop() {
}

Why This Step Is Important

Configuring TFT_eSPI correctly is important because the TFT display acts as the main user interface of DecayDock.

The display is responsible for showing:

  1. detected food items
  2. freshness status
  3. inventory list
  4. visual progress bars
  5. smart reminders


Setting Up Edge Impulse & Collecting the Dataset



After completing the hardware setup, the next step was training the AI model using Edge Impulse.

This is the most important part of the project because the food recognition system completely depends on how well the dataset is collected and trained.

For DecayDock, I wanted the AI model to recognize common refrigerator food items such as:

  1. tomatoes
  2. onions
  3. bananas
  4. milk packets
  5. spinach
  6. leftovers

The complete workflow was inspired by ESP32-CAM object recognition projects using Edge Impulse, but I customized the process specifically for food inventory and freshness tracking applications. (Circuit Digest)

Why I Chose Edge Impulse

I used Edge Impulse because it makes TinyML development easier for embedded systems like ESP32-CAM.

It provides:

  1. image dataset management
  2. image labeling
  3. model training
  4. testing
  5. Arduino library deployment

all in one platform.

Another important reason was that Edge Impulse models can run directly on ESP32-CAM without requiring cloud AI processing. This makes the system:

  1. faster
  2. low power
  3. offline capable
  4. more practical for daily use

Creating the Edge Impulse Project

Step 1 — Create Account

Go to:

Edge Impulse Studio

Create an account and log in.


Step 2 — Create New Project

Click:

Create New Project

Project Name:

DecayDock Food Recognition

Project Type:

Image Classification



Preparing the Dataset

Instead of downloading random internet datasets, I collected my own images using the ESP32-CAM because I wanted the AI model to work in real refrigerator and kitchen conditions.

This improved:

  1. practical accuracy
  2. lighting adaptation
  3. real-world performance

Food Categories Used

To keep the model lightweight and optimized for ESP32-CAM, I trained only a few important food classes:

Food Item | Images Collected
Tomato | 50+
Onion | 50+
Banana | 45+
Milk Packet | 40+
Spinach | 40+
Leftovers | 35+

Keeping fewer classes helped improve:

  1. detection speed
  2. memory usage
  3. model accuracy

Capturing Images

Image Collection Process

I used the ESP32-CAM to capture images from:

  1. different angles
  2. different distances
  3. multiple lighting conditions

I intentionally collected images:

  1. inside kitchen lighting
  2. near refrigerators
  3. with cluttered backgrounds

instead of clean studio conditions.

This helps the AI work better in actual daily environments.

Research and Edge Impulse community discussions also suggest that using real device images and multiple viewing angles improves TinyML accuracy significantly. (Edge Impulse Forum)

Important Tips I Followed

1. Fixed Camera Position

I kept the camera angle consistent during testing because stable positioning improves recognition reliability.

2. Plain Background for Initial Training

For early model training, I used a simple background to reduce false detections.

3. Good Lighting

Proper lighting helped improve image clarity and model learning.

4. Smaller Image Resolution

I used:

96 × 96

image size because smaller resolutions work better for embedded TinyML systems and reduce training time. (Medium)

Uploading the Dataset

After collecting images:

Step 1

Open:

Data Acquisition

inside Edge Impulse.

Step 2

Upload all collected images.

Split:

  1. 80% Training
  2. 20% Testing

Labeling Images

Next, I opened:

Labeling Queue

and manually labeled each image according to the food category.

Example:

  1. Tomato
  2. Onion
  3. Banana
  4. Milk

This step teaches the AI model how to identify different food items.

Creating the Impulse

After labeling:

Open:

Create Impulse

Settings used:

Setting | Value
Image Size | 96×96
Processing Block | Image
Learning Block | Object Detection

Training the AI Model

Inside:

Object Detection

I trained the TinyML model using Edge Impulse FOMO architecture because it is optimized for ESP32-CAM devices.

The model learns:

  1. shapes
  2. colors
  3. textures
  4. object outlines

to recognize food items.

Testing Model Accuracy

After training, I tested the model directly inside Edge Impulse.

The model successfully identified:

  1. tomatoes
  2. onions
  3. bananas
  4. milk packets

with good stability under normal indoor lighting.

Exporting the Arduino Library

After successful training:

Step 1

Open:

Deployment

Step 2

Select:

Arduino Library

Step 3

Download the generated ZIP library.

Installing the AI Library

Extract the downloaded ZIP file and move the generated Edge Impulse library into:

Documents → Arduino → Libraries

Restart Arduino IDE.

The AI model is now ready to run directly on the ESP32-CAM.
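
With the project name used in this build, the exported header should follow Edge Impulse's convention of deriving the name from the project title (spaces become underscores). The exact name appears in the downloaded library itself, so treat the line below as an assumption to verify:

#include <DecayDock_Food_Recognition_inferencing.h>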

Why This Step Was Important

This step transformed DecayDock from a normal ESP32 camera project into an actual Edge AI system.

Instead of manually entering food items, the device can now:

  1. recognize food automatically
  2. process images locally
  3. work without cloud AI
  4. provide real-time smart inventory assistance

This is what makes the project:

  1. practical
  2. intelligent
  3. lightweight
  4. sustainability-focused

while still remaining affordable and maker-friendly. (Circuit Digest)

Testing and Demonstration


After successfully deploying the Edge Impulse model to the ESP32-CAM, the next step was testing the complete food recognition system in real-world conditions.

This was one of the most important stages because I wanted the project to work reliably inside an actual kitchen environment instead of only working under ideal lighting conditions.

The main goal during testing was to verify:

  1. food recognition accuracy
  2. display response
  3. real-time detection speed
  4. stability under different lighting conditions

Uploading the Final Code

The exported Edge Impulse Arduino library was integrated into the main ESP32-CAM firmware inside Arduino IDE.

The code handled:

  1. camera initialization
  2. TinyML inference
  3. food detection
  4. TFT display updates
  5. freshness bar rendering

After uploading the code, the ESP32-CAM started running the AI model directly on-device without cloud processing.

Real-Time Detection Testing

To test the system, I placed different food items in front of the camera one by one.

Examples tested:

  1. tomato
  2. onion
  3. banana
  4. milk packet
  5. spinach

The ESP32-CAM successfully identified the food item and displayed:

  1. item name
  2. freshness percentage
  3. color freshness status

on the TFT display.

Freshness UI Testing

The freshness system was tested using simulated storage durations.

Example:

  1. newly added tomato → green freshness bar
  2. older stored spinach → yellow warning bar
  3. expired milk → red status indicator

This helped create a simple and intuitive visual system that users can understand instantly.

Lighting Condition Testing

One challenge during testing was varying refrigerator and kitchen lighting.

The model performed best under:

  1. moderate indoor lighting
  2. stable camera positioning
  3. minimal reflections

To improve reliability, I:

  1. adjusted camera angle
  2. increased training image variety
  3. tested under different room conditions

This improved overall detection consistency.

Performance Observations

The system achieved:

  1. fast object detection
  2. smooth TFT updates
  3. stable Edge AI inference
  4. low hardware power consumption

Because the AI model runs directly on the ESP32-CAM, the project works offline without requiring internet connectivity.

Creating the Display UI


After testing the AI food recognition system, the next step was creating a clean and user-friendly interface for the TFT display.

The main goal of the UI was to make the system feel like a real smart appliance instead of just a hardware prototype. I wanted users to instantly understand:

  1. which food item was detected
  2. how fresh it is
  3. which food should be consumed first

using simple visual elements.

UI Design Concept

The interface was designed with a minimal and modern layout inspired by:

  1. smart kitchen devices
  2. IoT dashboards
  3. food delivery applications

The TFT screen displays:

  1. food image
  2. detected food name
  3. AI confidence score
  4. freshness percentage
  5. animated progress bar
  6. freshness color indicator

Example:

🥬 Spinach

Freshness: 72%

🟩🟩🟩🟩🟩🟨⬜⬜⬜⬜

This creates a much more interactive and understandable user experience compared to plain text output.

Software & Libraries Used

Software / Library | Purpose
Arduino IDE | Main programming environment
TFT_eSPI | TFT graphics rendering
TJpg_Decoder | Displaying food images
SPI Library | SPI communication
Edge Impulse Library | AI food recognition
ESP32 Camera Library | Camera interface

Why I Used TFT_eSPI

The TFT_eSPI library was chosen because it is:

  1. lightweight
  2. fast
  3. optimized for ESP32
  4. ideal for embedded UI graphics

It allows:

  1. drawing shapes
  2. rendering text
  3. creating progress bars
  4. displaying images
  5. smooth screen updates

This helped make the interface look more polished and responsive.

Displaying Food Images

To improve the visual experience, I added small food images/icons on the TFT display.

Example:

  1. tomato image
  2. onion image
  3. banana image

The images were converted into:

JPEG format

and displayed using the:

TJpg_Decoder

library.

The images are stored inside ESP32 flash memory and loaded dynamically after food detection.
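
Below is a minimal sketch of how TJpg_Decoder is typically wired to TFT_eSPI; the tomato_jpg array is a placeholder that must be filled with real JPEG bytes from an image-to-C-array converter, and the coordinates are illustrative:

#include <TFT_eSPI.h>
#include <TJpg_Decoder.h>

TFT_eSPI tft = TFT_eSPI();

// Placeholder: paste the full JPEG byte array here (stored in flash).
const uint8_t tomato_jpg[] PROGMEM = { 0xFF, 0xD8 /* ...rest of the JPEG data... */ };

// TJpg_Decoder decodes the image in blocks and hands each block to this
// callback, which pushes the pixels to the TFT.
bool tft_output(int16_t x, int16_t y, uint16_t w, uint16_t h, uint16_t *bitmap) {
  if (y >= tft.height()) return 0;  // stop once past the bottom of the screen
  tft.pushImage(x, y, w, h, bitmap);
  return 1;                         // continue decoding
}

void setup() {
  tft.init();
  tft.setRotation(1);
  tft.fillScreen(TFT_BLACK);

  TJpgDec.setJpgScale(1);           // render at 1:1 scale
  TJpgDec.setSwapBytes(true);       // match TFT_eSPI's byte order
  TJpgDec.setCallback(tft_output);  // register the block-renderer above

  // Draw the stored food icon after a detection.
  TJpgDec.drawJpg(60, 80, tomato_jpg, sizeof(tomato_jpg));
}

void loop() {}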

Creating the Freshness Progress Bar

The progress bar is one of the main UI elements of DecayDock.

The bar visually represents freshness condition:

Color Meaning

Green Fresh

Yellow Consume Soon

Red Expiring

The progress value decreases over time based on:

  1. food type
  2. estimated shelf life
  3. storage duration

This creates a very intuitive user experience because users can understand freshness instantly without reading detailed information.
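
A minimal sketch of that mapping, matching the bar geometry used in the example code later in this step (the 60/30 thresholds are illustrative assumptions; TFT_GREEN, TFT_YELLOW, and TFT_RED are standard TFT_eSPI color constants):

// Map a freshness percentage to a bar color (thresholds are illustrative).
uint16_t freshnessColor(int freshness) {
  if (freshness >= 60) return TFT_GREEN;   // Fresh
  if (freshness >= 30) return TFT_YELLOW;  // Consume Soon
  return TFT_RED;                          // Expiring
}

// Draw a 200-pixel-wide bar filled proportionally to the freshness value.
void drawFreshnessBar(TFT_eSPI &tft, int freshness) {
  tft.drawRect(20, 250, 200, 20, TFT_WHITE);
  tft.fillRect(20, 250, freshness * 2, 20, freshnessColor(freshness));
}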

UI Layout Structure

The final UI layout contains:

Top Section

  1. AI detected food name
  2. confidence score

Middle Section

  1. food image/icon

Bottom Section

  1. freshness percentage
  2. progress bar
  3. consume reminder

Example:

“Use spinach today.”

Designing the UI

The UI was designed directly using TFT graphics functions inside Arduino IDE.

Main functions used:

  1. fillScreen()
  2. drawRect()
  3. fillRect()
  4. drawBitmap()
  5. setCursor()
  6. print()

These functions helped create:

  1. boxes
  2. labels
  3. progress bars
  4. image placeholders
  5. animated indicators

Example UI Code


#include <TFT_eSPI.h>

TFT_eSPI tft = TFT_eSPI();

String food = "Onion";
int freshness = 82;

void setup() {
  tft.init();
  tft.setRotation(1);
  tft.fillScreen(TFT_BLACK);

  // Food name
  tft.setTextColor(TFT_WHITE);
  tft.setTextSize(2);
  tft.setCursor(20, 20);
  tft.println(food);

  // Freshness text
  tft.setCursor(20, 220);
  tft.print("Freshness: ");
  tft.print(freshness);
  tft.println("%");

  // Progress bar outline
  tft.drawRect(20, 250, 200, 20, TFT_WHITE);

  // Progress fill
  tft.fillRect(20, 250, freshness * 2, 20, TFT_GREEN);
}

void loop() {
}

Real-Time UI Updates

Whenever the AI model detects a new food item:

  1. the previous screen clears
  2. new food image loads
  3. progress bar updates
  4. freshness value changes

This creates a smooth smart-device style experience.

Why This Step Was Important

The UI transformed the project from:

a basic AI detection demo

into:

a practical smart kitchen product prototype.

Instead of showing complicated technical outputs, the system communicates information using:

  1. images
  2. colors
  3. progress bars
  4. simple reminders

which makes the device:

  1. easier to use
  2. visually appealing
  3. beginner-friendly
  4. more realistic for everyday users


Assembling the Enclosure


After completing the hardware testing and TFT interface design, the final step was assembling all the components inside a compact magnetic enclosure.

The main goal during assembly was to make the project look and feel like a real smart home product instead of a temporary breadboard prototype.

I wanted the device to:

  1. mount easily on a refrigerator
  2. remain compact and lightweight
  3. protect the electronics
  4. keep wiring organized
  5. improve overall presentation quality

Enclosure Design

The enclosure was designed as a small rectangular fridge-mounted module with:

  1. front camera opening
  2. TFT display cutout
  3. internal space for ESP32-CAM
  4. cable management section
  5. rear magnetic mounting support

The design keeps the front side clean while hiding most wiring and electronics internally.

Components Mounted Inside

The following components were fixed inside the enclosure:

Component | Placement
ESP32-CAM | Rear internal section
TFT Display | Front display opening
Wiring Connections | Side cable channels
USB Power Cable | Bottom exit slot
Magnets | Rear panel

Mounting the TFT Display

The TFT display was aligned carefully with the front display window so the interface remained clearly visible.

To secure the display:

  1. hot glue
  2. double-sided foam tape
  3. small mounting supports

were used.

I intentionally left a slight bezel around the screen to give it a more realistic consumer-device appearance.

Positioning the ESP32-CAM

The ESP32-CAM was mounted behind the front panel with the camera aligned through a circular camera opening.

This positioning helped:

  1. improve image capture angle
  2. protect the lens
  3. reduce visible wiring
  4. maintain a cleaner design

The camera angle was adjusted slightly downward because most food items would be scanned from below during testing.

Magnetic Fridge Mount

To make installation simple and user-friendly, strong neodymium magnets were attached behind the enclosure.

This allowed the device to:

  1. stick directly to the refrigerator
  2. move easily when needed
  3. avoid drilling or permanent installation

The magnetic mounting system also made the prototype feel more like an actual smart kitchen accessory.

Cable Management

During assembly, special attention was given to cable management because exposed wires can make prototypes look unfinished.

To improve the appearance:

  1. wires were shortened
  2. cable ties were added
  3. internal routing was organized
  4. extra jumper wires were removed

This gave the project a cleaner and more professional hardware-maker look.

Final Power Setup

The system is powered using:

5V USB power

through the ESP32-CAM module.

The USB cable exits through a small slot at the bottom of the enclosure to keep the front side minimal and uncluttered.

Final Testing After Assembly

After enclosure assembly, I performed one final system test to verify:

  1. camera visibility
  2. TFT display readability
  3. stable power connection
  4. food recognition performance
  5. enclosure heat management

The device successfully operated while mounted vertically on a refrigerator surface.

Code

Note that this reference sketch, adapted from the CircuitDigest example, prints detections to an SSD1306 OLED over I2C; for the TFT build described above, the display calls are replaced with their TFT_eSPI equivalents.

/*
 * ESP32-CAM object detection code
 * by CircuitDigest on 27-June-2024
 */
#include <xxxx_inferencing.h> // replace xxxx with your Edge Impulse project name
#include "edge-impulse-sdk/dsp/image/image.hpp"
#include "esp_camera.h"
// Select camera model - find more camera models in camera_pins.h file here
// https://github.com/espressif/arduino-esp32/blob/master/libraries/ESP32/…
// #define CAMERA_MODEL_ESP_EYE // Has PSRAM
#define CAMERA_MODEL_AI_THINKER // Has PSRAM
#if defined(CAMERA_MODEL_ESP_EYE)
#define PWDN_GPIO_NUM -1
#define RESET_GPIO_NUM -1
#define XCLK_GPIO_NUM 4
#define SIOD_GPIO_NUM 18
#define SIOC_GPIO_NUM 23
#define Y9_GPIO_NUM 36
#define Y8_GPIO_NUM 37
#define Y7_GPIO_NUM 38
#define Y6_GPIO_NUM 39
#define Y5_GPIO_NUM 35
#define Y4_GPIO_NUM 14
#define Y3_GPIO_NUM 13
#define Y2_GPIO_NUM 34
#define VSYNC_GPIO_NUM 5
#define HREF_GPIO_NUM 27
#define PCLK_GPIO_NUM 25
#elif defined(CAMERA_MODEL_AI_THINKER)
#define PWDN_GPIO_NUM 32
#define RESET_GPIO_NUM -1
#define XCLK_GPIO_NUM 0
#define SIOD_GPIO_NUM 26
#define SIOC_GPIO_NUM 27
#define Y9_GPIO_NUM 35
#define Y8_GPIO_NUM 34
#define Y7_GPIO_NUM 39
#define Y6_GPIO_NUM 36
#define Y5_GPIO_NUM 21
#define Y4_GPIO_NUM 19
#define Y3_GPIO_NUM 18
#define Y2_GPIO_NUM 5
#define VSYNC_GPIO_NUM 25
#define HREF_GPIO_NUM 23
#define PCLK_GPIO_NUM 22
#else
#error "Camera model not selected"
#endif
/* Constant defines -------------------------------------------------------- */
#define EI_CAMERA_RAW_FRAME_BUFFER_COLS 320
#define EI_CAMERA_RAW_FRAME_BUFFER_ROWS 240
#define EI_CAMERA_FRAME_BYTE_SIZE 3
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>
// ESP32-CAM doesn't have dedicated i2c pins, so we define our own. Let's choose 15 and 14
#define I2C_SDA 15
#define I2C_SCL 14
TwoWire I2Cbus = TwoWire(0);
// Display defines
#define SCREEN_WIDTH 128
#define SCREEN_HEIGHT 64
#define OLED_RESET -1
#define SCREEN_ADDRESS 0x3C
Adafruit_SSD1306 display(SCREEN_WIDTH, SCREEN_HEIGHT, &I2Cbus, OLED_RESET);
/* Private variables ------------------------------------------------------- */
static bool debug_nn = false; // Set this to true to see e.g. features generated from the raw signal
static bool is_initialised = false;
uint8_t *snapshot_buf; //points to the output of the capture
static camera_config_t camera_config = {
  .pin_pwdn = PWDN_GPIO_NUM,
  .pin_reset = RESET_GPIO_NUM,
  .pin_xclk = XCLK_GPIO_NUM,
  .pin_sscb_sda = SIOD_GPIO_NUM,
  .pin_sscb_scl = SIOC_GPIO_NUM,
  .pin_d7 = Y9_GPIO_NUM,
  .pin_d6 = Y8_GPIO_NUM,
  .pin_d5 = Y7_GPIO_NUM,
  .pin_d4 = Y6_GPIO_NUM,
  .pin_d3 = Y5_GPIO_NUM,
  .pin_d2 = Y4_GPIO_NUM,
  .pin_d1 = Y3_GPIO_NUM,
  .pin_d0 = Y2_GPIO_NUM,
  .pin_vsync = VSYNC_GPIO_NUM,
  .pin_href = HREF_GPIO_NUM,
  .pin_pclk = PCLK_GPIO_NUM,
  // XCLK 20MHz or 10MHz for OV2640 double FPS (experimental)
  .xclk_freq_hz = 20000000,
  .ledc_timer = LEDC_TIMER_0,
  .ledc_channel = LEDC_CHANNEL_0,
  .pixel_format = PIXFORMAT_JPEG, // YUV422, GRAYSCALE, RGB565, JPEG
  .frame_size = FRAMESIZE_QVGA,   // QQVGA-UXGA; do not use sizes above QVGA when not JPEG
  .jpeg_quality = 12,             // 0-63, lower number means higher quality
  .fb_count = 1,                  // if more than one, i2s runs in continuous mode; use only with JPEG
  .fb_location = CAMERA_FB_IN_PSRAM,
  .grab_mode = CAMERA_GRAB_WHEN_EMPTY,
};
/* Function definitions ------------------------------------------------------- */
bool ei_camera_init(void);
void ei_camera_deinit(void);
bool ei_camera_capture(uint32_t img_width, uint32_t img_height, uint8_t *out_buf);
/**
* @brief Arduino setup function
*/
void setup() {
  // put your setup code here, to run once:
  Serial.begin(115200);
  // Initialize I2C with our defined pins
  I2Cbus.begin(I2C_SDA, I2C_SCL, 100000);
  // SSD1306_SWITCHCAPVCC = generate display voltage from 3.3V internally
  if (!display.begin(SSD1306_SWITCHCAPVCC, SCREEN_ADDRESS)) {
    Serial.printf("SSD1306 OLED display failed to initialize.\nCheck that display SDA is connected to pin %d and SCL connected to pin %d\n", I2C_SDA, I2C_SCL);
    while (true)
      ;
  }
  // Comment out the line below to start inference immediately after upload
  while (!Serial)
    ;
  Serial.println("Edge Impulse Inferencing Demo");
  if (ei_camera_init() == false) {
    ei_printf("Failed to initialize Camera!\r\n");
  } else {
    ei_printf("Camera initialized\r\n");
  }
  ei_printf("\nStarting continuous inference in 2 seconds...\n");
  display.clearDisplay();
  display.setCursor(0, 0);
  display.setTextSize(1);
  display.setTextColor(SSD1306_WHITE);
  display.print("Starting continuous\n inference in\n 2 seconds...");
  display.display();
  ei_sleep(2000);
  display.clearDisplay();
}
/**
* @brief Get data and run inferencing
*
* @param[in] debug Get debug info if true
*/
void loop() {
  display.clearDisplay();
  // instead of wait_ms, we'll wait on the signal; this allows threads to cancel us...
  if (ei_sleep(5) != EI_IMPULSE_OK) {
    return;
  }
  snapshot_buf = (uint8_t *)malloc(EI_CAMERA_RAW_FRAME_BUFFER_COLS * EI_CAMERA_RAW_FRAME_BUFFER_ROWS * EI_CAMERA_FRAME_BYTE_SIZE);
  // check if allocation was successful
  if (snapshot_buf == nullptr) {
    ei_printf("ERR: Failed to allocate snapshot buffer!\n");
    return;
  }
  ei::signal_t signal;
  signal.total_length = EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT;
  signal.get_data = &ei_camera_get_data;
  if (ei_camera_capture((size_t)EI_CLASSIFIER_INPUT_WIDTH, (size_t)EI_CLASSIFIER_INPUT_HEIGHT, snapshot_buf) == false) {
    ei_printf("Failed to capture image\r\n");
    free(snapshot_buf);
    return;
  }
  // Run the classifier
  ei_impulse_result_t result = { 0 };
  EI_IMPULSE_ERROR err = run_classifier(&signal, &result, debug_nn);
  if (err != EI_IMPULSE_OK) {
    ei_printf("ERR: Failed to run classifier (%d)\n", err);
    free(snapshot_buf); // release the buffer before bailing out
    return;
  }
  // print the predictions
  ei_printf("Predictions (DSP: %d ms., Classification: %d ms., Anomaly: %d ms.): \n",
            result.timing.dsp, result.timing.classification, result.timing.anomaly);
#if EI_CLASSIFIER_OBJECT_DETECTION == 1
  bool bb_found = result.bounding_boxes[0].value > 0;
  for (size_t ix = 0; ix < result.bounding_boxes_count; ix++) {
    auto bb = result.bounding_boxes[ix];
    if (bb.value == 0) {
      continue;
    }
    ei_printf("  %s (%f) [ x: %u, y: %u, width: %u, height: %u ]\n", bb.label, bb.value, bb.x, bb.y, bb.width, bb.height);
    display.setCursor(0, 20 * ix);
    display.setTextSize(2);
    display.setTextColor(SSD1306_WHITE);
    display.print(bb.label);
    display.print("-");
    display.print(int((bb.value) * 100));
    display.print("%");
    display.display();
  }
  if (!bb_found) {
    ei_printf("  No objects found\n");
    display.setCursor(0, 16);
    display.setTextSize(2);
    display.setTextColor(SSD1306_WHITE);
    display.print("No objects found");
    display.display();
  }
#else
  for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
    ei_printf("  %s: %.5f\n", result.classification[ix].label,
              result.classification[ix].value);
  }
#endif
#if EI_CLASSIFIER_HAS_ANOMALY == 1
  ei_printf("  anomaly score: %.3f\n", result.anomaly);
#endif
  free(snapshot_buf);
}
/**
* @brief Setup image sensor & start streaming
*
* @retval false if initialisation failed
*/
bool ei_camera_init(void) {
  if (is_initialised) return true;
#if defined(CAMERA_MODEL_ESP_EYE)
  pinMode(13, INPUT_PULLUP);
  pinMode(14, INPUT_PULLUP);
#endif
  // initialize the camera
  esp_err_t err = esp_camera_init(&camera_config);
  if (err != ESP_OK) {
    Serial.printf("Camera init failed with error 0x%x\n", err);
    return false;
  }
  sensor_t *s = esp_camera_sensor_get();
  // initial sensors are flipped vertically and colors are a bit saturated
  if (s->id.PID == OV3660_PID) {
    s->set_vflip(s, 1);       // flip it back
    s->set_brightness(s, 1);  // up the brightness just a bit
    s->set_saturation(s, 0);  // lower the saturation
  }
#if defined(CAMERA_MODEL_M5STACK_WIDE)
  s->set_vflip(s, 1);
  s->set_hmirror(s, 1);
#elif defined(CAMERA_MODEL_ESP_EYE)
  s->set_vflip(s, 1);
  s->set_hmirror(s, 1);
  s->set_awb_gain(s, 1);
#endif
  is_initialised = true;
  return true;
}
/**
* @brief Stop streaming of sensor data
*/
void ei_camera_deinit(void) {
  // deinitialize the camera
  esp_err_t err = esp_camera_deinit();
  if (err != ESP_OK) {
    ei_printf("Camera deinit failed\n");
    return;
  }
  is_initialised = false;
  return;
}
/**
* @brief Capture, rescale and crop image
*
* @param[in] img_width width of output image
* @param[in] img_height height of output image
* @param[in] out_buf pointer to store output image, NULL may be used
* if ei_camera_frame_buffer is to be used for capture and resize/cropping.
*
* @retval false if not initialised, image captured, rescaled or cropped failed
*
*/
bool ei_camera_capture(uint32_t img_width, uint32_t img_height, uint8_t *out_buf) {
  bool do_resize = false;
  if (!is_initialised) {
    ei_printf("ERR: Camera is not initialized\r\n");
    return false;
  }
  camera_fb_t *fb = esp_camera_fb_get();
  if (!fb) {
    ei_printf("Camera capture failed\n");
    return false;
  }
  bool converted = fmt2rgb888(fb->buf, fb->len, PIXFORMAT_JPEG, snapshot_buf);
  esp_camera_fb_return(fb);
  if (!converted) {
    ei_printf("Conversion failed\n");
    return false;
  }
  if ((img_width != EI_CAMERA_RAW_FRAME_BUFFER_COLS)
      || (img_height != EI_CAMERA_RAW_FRAME_BUFFER_ROWS)) {
    do_resize = true;
  }
  if (do_resize) {
    ei::image::processing::crop_and_interpolate_rgb888(
      out_buf,
      EI_CAMERA_RAW_FRAME_BUFFER_COLS,
      EI_CAMERA_RAW_FRAME_BUFFER_ROWS,
      out_buf,
      img_width,
      img_height);
  }
  return true;
}
static int ei_camera_get_data(size_t offset, size_t length, float *out_ptr) {
  // we already have an RGB888 buffer, so recalculate offset into pixel index
  size_t pixel_ix = offset * 3;
  size_t pixels_left = length;
  size_t out_ptr_ix = 0;
  while (pixels_left != 0) {
    // Swap BGR to RGB here
    // due to https://github.com/espressif/esp32-camera/issues/379
    out_ptr[out_ptr_ix] = (snapshot_buf[pixel_ix + 2] << 16) + (snapshot_buf[pixel_ix + 1] << 8) + snapshot_buf[pixel_ix];
    // go to the next pixel
    out_ptr_ix++;
    pixel_ix += 3;
    pixels_left--;
  }
  // and done!
  return 0;
}
#if !defined(EI_CLASSIFIER_SENSOR) || EI_CLASSIFIER_SENSOR != EI_CLASSIFIER_SENSOR_CAMERA
#error "Invalid model for current sensor"
#endif

Conclusion


Building DecayDock was much more than just making another electronics project. It started from a very simple real-life observation at home: seeing vegetables and leftovers go to waste because they were forgotten inside the refrigerator. At first it looked like a small daily habit, but while researching I realized how strongly household food waste is connected to larger global issues like hunger, climate change, methane emissions, and unnecessary resource consumption.

What made this journey special for me was the process of turning a small idea into a working Edge AI product using affordable maker hardware. From collecting food datasets using the ESP32-CAM, training TinyML models in Edge Impulse, debugging wiring problems, designing the TFT interface, and finally assembling everything inside the magnetic enclosure — every stage taught me something new.

There were many moments where things did not work properly:

  1. wrong detections
  2. unstable wiring
  3. display issues
  4. lighting problems during testing

but solving those challenges was also the most enjoyable part of the project. Watching the system finally recognize a tomato or onion in real time on the TFT screen genuinely felt exciting because it transformed the project from just code and circuits into something interactive and meaningful.

One thing I learned during this project is that sustainability does not always require large industrial systems or expensive technology. Sometimes even a small device placed on a refrigerator can help people become more aware of their daily habits and reduce waste little by little.

DecayDock represents the idea that technology should not only be smart, but also responsible and human-centered. The project combines:

  1. embedded AI
  2. sustainability
  3. everyday usability
  4. simple human behavior

into one practical solution.

Most importantly, this project reminded me why I enjoy hardware making so much — the ability to take a real-world problem, experiment creatively, learn through failures, and finally build something that could genuinely help people in daily life.