
Custom Object Detection System

Description coming soon (ETA: September 2025)

Engineer: James Dai
School: Lynbrook High School
Area of Interest: Computer Science
Grade: Incoming Senior

James Dai

Second Milestone

For my second milestone, I built my own model from a dataset that I found online. I also wrote my own script to run the model on both my Raspberry Pi and my personal computer, taking input from my webcam, my phone's camera, or my Raspberry Pi camera. This means I can now test and train models at home, using my phone as the camera. My next milestone will be training a model on my own dataset, which I will create using computer parts I have at home. The current model only reaches 80% test accuracy, so I'm hoping that my own dataset will get it to 90-95%. One challenge I've faced so far was setting up the environment on the Raspberry Pi: I wanted to run everything in a virtual environment on the Pi, so I needed to change a lot of settings that I had never touched before. It was also difficult to find a large enough dataset with the objects I was looking for, which is another reason I want to switch to my own dataset.
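The one step that has to be identical on the Pi and on my PC is the frame preprocessing: resize to the model's input size, scale pixels to [0, 1], and add a batch dimension. Here is a minimal NumPy-only sketch of that step; the nearest-neighbor resize is just a stand-in for cv2.resize, which does the same job:

```python
import numpy as np

def preprocess(frame, width, height):
    """Resize an H x W x 3 uint8 frame to the model's input size,
    scale pixel values to [0, 1], and add a batch dimension."""
    h, w = frame.shape[:2]
    # Nearest-neighbor resize using pure NumPy indexing
    # (a stand-in for cv2.resize).
    rows = (np.arange(height) * h) // height
    cols = (np.arange(width) * w) // width
    resized = frame[rows[:, None], cols]
    # Float models expect float32 input in [0, 1] with shape (1, H, W, 3)
    return np.expand_dims(resized.astype(np.float32) / 255.0, axis=0)
```

The result has shape (1, height, width, 3) and dtype float32, which is what a float TFLite classification model expects regardless of which camera the frame came from.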

Script for the Pi:


from picamera2 import Picamera2
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter
from PIL import Image
import time

# --- Load labels from file ---
def load_labels(label_path):
    with open(label_path, 'r') as f:
        return [line.strip() for line in f.readlines()]

# --- Set the input tensor for the interpreter ---
def set_input_tensor(interpreter, image):
    input_details = interpreter.get_input_details()[0]
    interpreter.set_tensor(input_details['index'], image)

# --- Run inference and return top result ---
def classify_image(interpreter, image):
    set_input_tensor(interpreter, image)
    interpreter.invoke()

    output_details = interpreter.get_output_details()[0]
    output = interpreter.get_tensor(output_details['index'])[0]

    top_result = np.argmax(output)
    return top_result, output[top_result]

# --- Setup paths ---
MODEL_PATH = "pc1Stuff/skibPC.tflite"
LABEL_PATH = "pc1Stuff/labels.txt"

# --- Load model and allocate tensors ---
interpreter = Interpreter(MODEL_PATH)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
_, height, width, _ = input_details[0]['shape']

# --- Load labels ---
labels = load_labels(LABEL_PATH)

# --- Initialize Picamera2 ---
picam2 = Picamera2()
picam2.preview_configuration.main.size = (800, 800)
picam2.preview_configuration.main.format = "RGB888"
picam2.configure("preview")
picam2.start()

# --- Main loop ---
print("Starting camera inference. Press 'q' to quit.")
while True:
    frame = picam2.capture_array()

    # Preprocess frame for model
    image = cv2.resize(frame, (width, height))
    # image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    image = image.astype(np.float32) / 255.0
    image = np.expand_dims(image, axis=0)


    label_id, prob = classify_image(interpreter, image)
    label_text = f"{labels[label_id]} ({prob:.2f})"

    # Display result on image
    # Only overlay the label when the model is reasonably confident
    if prob > 0.7:
        cv2.putText(frame, label_text, (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)

    cv2.imshow("Picamera2 - TFLite Classification", frame)

    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()
picam2.stop()
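One thing worth double-checking about the 0.7 threshold in the loop above: prob is whatever the model's output layer produces. If the .tflite export ends in a softmax, it is already a probability; if it emits raw logits, the cutoff isn't meaningful until the scores are normalized. A small sketch of the normalization I would apply in that case:

```python
import numpy as np

def softmax(scores):
    """Convert raw logits into probabilities that sum to 1.
    Subtracting the max first keeps the exponentials numerically stable."""
    e = np.exp(scores - np.max(scores))
    return e / e.sum()
```

With this, classify_image could return softmax(output)[top_result] instead of the raw score, and the same threshold would work for either kind of model output.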


First Milestone

My project consists of a Raspberry Pi with a camera and a display, intended to detect custom objects. The display shows what the camera is currently watching, along with what the model identifies the currently detected object as. I have assembled the Raspberry Pi, display, and camera, and have confirmed that the default model (made for generic objects) works. I have implemented a generic TensorFlow model, which is able to detect objects such as water bottles, notebooks, and plastic bags. Later, I will build my own model using my own data for objects such as screws and fans. So far, I had a few challenges using SSH to connect to the Raspberry Pi, but I figured out the problem was the Wi-Fi network I was connected to.

Bill of Materials

Part | Note | Price | Link
CanaKit Raspberry Pi 4 4GB Starter PRO Kit - 4GB RAM | everything needed to use the Raspberry Pi; runs the models | $119.99 | Link
Adafruit BrainCraft HAT | display + I/O for the Raspberry Pi | $44.99 | Link
Raspberry Pi Camera Module 3 | camera for input, to detect objects | $29.99 | Link

Starter Project: RGB Light

About the Project

For my starter project, I started with a prebuilt board and identifiable parts, then soldered the correct parts onto the correct locations. As a final result, I produced a board with a controllable RGB light, which could be adjusted by three separate sliders. I learned more about soldering, as well as about power input in devices: due to the low resistance, my board was unable to take power input from some variations of USB-C cables.

RGB light

Materials List

PCB, LED; total cost: $7.99 (Linked Here)

RGB light