
Programming 103: Building Real-World Apps with AI

Price

1220 / 18 classes

Duration

1 hour 50 minutes

About the Course

Unit 1 — AI Web Apps: Build Your First Smart Interface

Summary

Students learn how to bring AI to life by building simple, interactive web apps. They explore Streamlit, connect to LLM APIs, and design clean user interfaces. The unit focuses on prompt engineering: communicating effectively with AI models through system prompts, examples, constraints, and structured output.

Students will:

  • Build interactive AI-powered apps

  • Learn prompt engineering strategies

  • Understand system vs user prompts

  • Deploy a simple demo app
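
To give a feel for the material, here is a minimal sketch of the kind of app built in this unit. It assumes the openai Python SDK and an API key in the environment; the app idea, model name, and prompts are illustrative placeholders, not the exact course code.

  import streamlit as st
  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  st.title("Recipe Helper")
  question = st.text_input("What ingredients do you have?")

  if question:
      response = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder model name
          messages=[
              # System prompt: fixes the assistant's role, a constraint,
              # and a structured output format
              {"role": "system",
               "content": "You are a cooking assistant. Reply with exactly "
                          "three recipe ideas as a numbered list."},
              # User prompt: the question typed into the app
              {"role": "user", "content": question},
          ],
      )
      st.write(response.choices[0].message.content)

The system prompt shows the unit's themes in one place: a role, a constraint ("exactly three"), and a structured output format (a numbered list).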


Unit 2 — LLM Agents: Teach AI to Think and Act

Summary

Students move from simple chatbots to rational LLM agents that can plan, decide, and take action. They learn the difference between reflex rules and goal-oriented reasoning. Their agents read and write files, make decisions, and produce JSON instructions that the app carries out, building the foundation of “AI that gets things done.”

Students will:

  • Build a rational agent loop (plan → act → reflect)

  • Design tools (functions) the agent can call

  • Handle invalid responses and add reliability

  • Create agents like homework planners or report generators

Sample Project: Homework planner
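
A minimal sketch of the plan → act → reflect loop, using only the standard library. The tool names and the ask_llm function are illustrative stand-ins (any function that takes a prompt string and returns the model's text reply would do), not a specific course API.

  import json

  # Illustrative tools the agent is allowed to call.
  def read_file(path):
      with open(path) as f:
          return f.read()

  def write_file(path, text):
      with open(path, "w") as f:
          f.write(text)

  TOOLS = {"read_file": read_file, "write_file": write_file}

  def run_agent(ask_llm, goal, max_steps=5):
      history = "Goal: " + goal + "\n"
      for _ in range(max_steps):
          # Plan: ask the model for its next step as a JSON instruction.
          reply = ask_llm(history + 'Reply with JSON: '
                          '{"tool": ..., "args": {...}} or {"done": true}')
          try:
              step = json.loads(reply)
          except json.JSONDecodeError:
              # Reliability: tell the model its output was invalid and retry.
              history += "Your last reply was not valid JSON. Try again.\n"
              continue
          if step.get("done"):
              return history
          tool = TOOLS.get(step.get("tool"))
          if tool is None:
              history += "Unknown tool. Use read_file or write_file.\n"
              continue
          # Act: run the chosen tool, then reflect by feeding the result back.
          result = tool(**step.get("args", {}))
          history += "Tool result: " + str(result) + "\n"
      return history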


Unit 3 — Memory for AI: Make Your Apps Remember

Summary

In this unit, students explore how to give AI apps the ability to remember information from earlier interactions. They learn the difference between short-term memory (tracking the current conversation or task) and simple saved memory (storing notes or preferences for later use). They then design apps where the AI remembers what the user prefers, which tasks were completed earlier, and what goals were set.

Students will:

  • Add short-term memory to their agents using conversation history

  • Build simple saved-memory features (e.g., notes, preferences, to-do lists)

  • Create apps that recall user information over time

  • Learn when AI should forget vs remember for safety and accuracy
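
A minimal sketch of the two memory layers, again with illustrative names: short-term memory is just the running conversation list, and saved memory is a small JSON file of preferences and notes that survives between sessions.

  import json, os

  MEMORY_FILE = "memory.json"  # simple saved memory, persisted between runs

  def load_memory():
      if os.path.exists(MEMORY_FILE):
          with open(MEMORY_FILE) as f:
              return json.load(f)
      return {"preferences": {}, "notes": []}

  def save_memory(memory):
      with open(MEMORY_FILE, "w") as f:
          json.dump(memory, f)

  def remember_preference(key, value):
      memory = load_memory()
      memory["preferences"][key] = value
      save_memory(memory)  # e.g., remember_preference("diet", "vegetarian")

  history = []  # short-term memory: the current session's conversation

  def chat(ask_llm, user_message):
      memory = load_memory()
      history.append("user: " + user_message)
      # Put saved preferences and the recent conversation into the prompt.
      prompt = ("Known preferences: " + json.dumps(memory["preferences"])
                + "\n" + "\n".join(history))
      reply = ask_llm(prompt)
      history.append("assistant: " + reply)
      return reply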


Unit 4 — AI That Sees: Vision Apps with OpenCV & YOLO

Summary

Students learn how computers interpret the world through computer vision. They use OpenCV to transform images and YOLO to detect objects, then combine vision with LLM reasoning to build real-world apps, from live object detectors to scene explainers.

Students will:

  • Capture and process images using OpenCV

  • Run YOLO object detection

  • Interpret results with an LLM

  • Build apps that understand the physical world


Sample Project: A smart refrigerator that detects food and plans meals.
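
A minimal sketch of the vision pipeline behind that project. It assumes the opencv-python and ultralytics packages and a pretrained yolov8n.pt weight file; the image path and the ask_llm chat function are placeholders as in the earlier units.

  import cv2
  from ultralytics import YOLO

  model = YOLO("yolov8n.pt")  # small pretrained detector from ultralytics

  def plan_meal(image_path, ask_llm):
      # Capture and process the image with OpenCV.
      frame = cv2.imread(image_path)
      frame = cv2.resize(frame, (640, 480))
      # Run YOLO object detection and collect the detected class labels.
      results = model(frame)
      labels = [model.names[int(box.cls)] for box in results[0].boxes]
      # Interpret the detections with an LLM.
      prompt = ("These foods were detected in a fridge: " + ", ".join(labels)
                + ". Suggest a meal that uses them.")
      return ask_llm(prompt)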



Your Instructor

Dr. Zhou
