MediaPipe Face Recognition Example

Register faces and recognize them in real time from the camera feed

How to Use

  1. Start Camera: Click the "Start Camera" button to access your webcam.
  2. Register Faces:
    • Position a person's face in front of the camera (ensure only one face is visible)
    • Click "Register Current Face"
    • Enter a name for the person
    • Click "Save" to store the face
    • Repeat for multiple people (a code sketch of this flow appears right after this list)
  3. Recognition: Once faces are registered, the system will automatically detect and identify registered people in the camera feed.
  4. Visual Indicators:
    • Green box + name = Recognized person
    • Red box + "Unknown" = Unrecognized person
    • Cyan dots = Facial landmarks (478 points)
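
Under the hood, steps 1 and 2 boil down to grabbing a webcam stream, capturing the current landmark set, and keeping it in an in-memory list. A minimal sketch, assuming illustrative names (registeredFaces, startCamera, and registerFace are not necessarily the demo's actual identifiers):

// Hypothetical registration flow (names are illustrative)
const registeredFaces = []  // { name, landmarks } entries, kept in memory only

async function startCamera(videoElement) {
  // Request the webcam and attach the stream to the <video> element
  const stream = await navigator.mediaDevices.getUserMedia({ video: true })
  videoElement.srcObject = stream
  await videoElement.play()
}

function registerFace(name, landmarkResults) {
  // Require exactly one visible face when registering
  if (landmarkResults.faceLandmarks.length !== 1) return false
  registeredFaces.push({ name, landmarks: landmarkResults.faceLandmarks[0] })
  return true
}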

Technical Details

  • Face Detection: Uses MediaPipe's BlazeFace short-range detector
  • Landmark Extraction: Extracts 478 facial landmarks per face
  • Recognition Method: Compares faces by the average Euclidean distance between corresponding landmarks, converted to a similarity score
  • Similarity Threshold: 85% (adjustable in the code; see the sketch after this list)
  • Max Faces: Detects up to 5 faces simultaneously
  • Running Mode: Video streaming with GPU acceleration
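
Recognition then reduces to picking the registered face with the highest landmark similarity and accepting it only if the score clears the threshold. A rough sketch building on the registeredFaces list and calculateLandmarkSimilarity shown elsewhere on this page (recognizeFace and SIMILARITY_THRESHOLD are illustrative names):

const SIMILARITY_THRESHOLD = 0.85  // the 85% threshold mentioned above

function recognizeFace(landmarks) {
  // Find the best-scoring registered face
  let best = { name: 'Unknown', score: 0 }
  for (const face of registeredFaces) {
    const score = calculateLandmarkSimilarity(landmarks, face.landmarks)
    if (score > best.score) best = { name: face.name, score }
  }
  // Reject matches below the threshold
  if (best.score < SIMILARITY_THRESHOLD) best.name = 'Unknown'
  return best
}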

Tips for Best Results

  • Ensure good lighting conditions
  • Face the camera directly during registration
  • Keep a consistent distance from the camera (1-3 feet)
  • Register multiple angles of the same person for better accuracy
  • Avoid registering with glasses if you'll be wearing contacts (or vice versa)

Limitations

  • This is a simplified face recognition system using landmark matching
  • Not suitable for security applications (use dedicated face recognition models)
  • Performance depends on lighting and camera quality
  • May struggle with extreme angles or partial occlusion
  • Registered faces are stored in browser memory (not persisted)

Code Structure

Key Components:

// Initialize MediaPipe Vision Tasks
const vision = await FilesetResolver.forVisionTasks(...)

// Create Face Detector (for bounding boxes)
faceDetector = await FaceDetector.createFromOptions(vision, {
  baseOptions: {
    modelAssetPath: 'blaze_face_short_range.tflite',
    delegate: 'GPU'
  },
  runningMode: 'VIDEO'
})

// Create Face Landmarker (for 478 facial landmarks)
faceLandmarker = await FaceLandmarker.createFromOptions(vision, {
  baseOptions: {
    modelAssetPath: 'face_landmarker.task',
    delegate: 'GPU'
  },
  runningMode: 'VIDEO',
  numFaces: 5
})

// Detect faces in video frame
const detectionResults = faceDetector.detectForVideo(video, timestamp)
const landmarkResults = faceLandmarker.detectForVideo(video, timestamp)

// Compare landmarks for recognition
function calculateLandmarkSimilarity(landmarks1, landmarks2) {
  // Calculate the average Euclidean distance between corresponding points
  let totalDistance = 0
  for (let i = 0; i < landmarks1.length; i++) {
    const dx = landmarks1[i].x - landmarks2[i].x
    const dy = landmarks1[i].y - landmarks2[i].y
    totalDistance += Math.sqrt(dx * dx + dy * dy)
  }
  // Convert distance to a similarity score (0-1); this linear mapping is one simple choice
  return Math.max(0, 1 - totalDistance / landmarks1.length)
}
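
Putting the pieces together, a per-frame loop can pair each detection box with its landmark set, run the recognition check, and draw the green/red overlays described under Visual Indicators. A hedged sketch (renderFrame and the index-based pairing of detections to landmark sets are illustrative, not necessarily the demo's exact approach):

// Hypothetical per-frame loop combining detection, landmarks, and recognition
function renderFrame(video, canvas, timestamp) {
  const ctx = canvas.getContext('2d')
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height)

  const detections = faceDetector.detectForVideo(video, timestamp).detections
  const faces = faceLandmarker.detectForVideo(video, timestamp).faceLandmarks

  detections.forEach((detection, i) => {
    const landmarks = faces[i]
    const match = landmarks ? recognizeFace(landmarks) : { name: 'Unknown' }
    const { originX, originY, width, height } = detection.boundingBox
    ctx.strokeStyle = match.name === 'Unknown' ? 'red' : 'lime'  // red = unknown, green = recognized
    ctx.strokeRect(originX, originY, width, height)
    ctx.fillStyle = ctx.strokeStyle
    ctx.fillText(match.name, originX, originY - 4)
  })

  requestAnimationFrame((t) => renderFrame(video, canvas, t))
}

Once the models are ready, kicking this off with renderFrame(video, canvas, performance.now()) keeps the overlay in sync with the video.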

Built with MediaPipe Face Detector and MediaPipe Face Landmarker

Note: This demo runs entirely in the browser using WebAssembly, with optional GPU acceleration.