Video Processing Web Application

A FastAPI-based web application that provides a framework for video upload, processing, and streaming with real-time processing status updates.

Features

  • Video upload with extension validation
  • Real-time processing status updates via Server-Sent Events (SSE)
  • Progress tracking with detailed step information
  • Side-by-side video comparison (original vs processed)
  • Background task processing
  • Streaming video playback
  • Clean separation of concerns with utility modules
  • Health monitoring and automatic container recovery
  • Configurable video processing pipeline

Project Structure

├── pyproject.toml        # Project metadata and build configuration
├── setup.py             # Development installation
├── requirements.txt     # Production dependencies
├── requirements-dev.txt # Development dependencies
├── pytest.ini          # Pytest configuration
├── setup.sh            # Installation script
├── start.sh            # Application startup script
├── main.py             # Application entry point
├── app/
│   ├── core/           # Core application settings and config
│   │   └── settings.py # Application configuration
│   ├── routes/
│   │   ├── dashboard.py    # Dashboard UI route
│   │   ├── upload.py       # Video upload handling
│   │   ├── video.py        # Video streaming and status
│   │   └── health.py       # Health check endpoint
│   ├── templates/
│   │   └── dashboard.html  # Main UI template
│   └── utils/
│       ├── files.py        # File handling utilities
│       └── processing.py   # Video processing framework
├── static/                 # Static files directory
├── uploads/               # Original video storage
├── processed/            # Processed video storage
├── tests/               # Test suite directory
│   ├── conftest.py     # Test configuration and fixtures
│   └── utils/          # Utility tests
├── deployments/         # Deployment configurations
│   └── infra/          # Infrastructure as code
├── .env                # Environment configuration
└── sample.env          # Example environment configuration

Getting Started

The easiest way to get started is using the setup script:

# Make the script executable
chmod +x setup.sh

# Run the setup script
./setup.sh

The setup script will:

  • Create a Python virtual environment
  • Install all required dependencies
  • Create necessary directories (uploads, processed)
  • Set up a default .env file
  • Configure logging

Manual Setup

If you prefer to set up manually, follow these steps:

  1. Create a virtual environment:

    python -m venv .venv
    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  2. Install dependencies:

    pip install -r requirements.txt
  3. Configure environment variables in .env:

    HOST=0.0.0.0
    PORT=8000
    DEBUG=True
    ALLOWED_EXTENSIONS="mp4, avi, mov, mkv"
    UPLOAD_DIR="uploads"
    PROCESSED_DIR="processed"
  4. Run the application:

    # Make the start script executable
    chmod +x start.sh
    
    # Start the application
    ./start.sh
    
    # On Windows:
    # python main.py

Development Setup

For development, you might want to install additional tools:

# Install development dependencies
pip install -r requirements-dev.txt

# Set up pre-commit hooks
pre-commit install

# Install package in development mode
pip install -e .

Running Tests

# Run all tests
pytest

# Run with coverage report
pytest --cov=app

# Run specific test file
pytest tests/utils/test_files.py

# Run tests matching a pattern
pytest -k "test_processing"

For more testing information, see tests/README.md.
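
As a starting point for new tests, here is a minimal sketch of a unit test for a file-handling helper. It assumes a hypothetical is_allowed_extension function in app/utils/files.py; adapt the import and the cases to the utilities that module actually exposes.

# tests/utils/test_files_example.py -- illustrative sketch only
import pytest

# Hypothetical helper name; replace with the real function from app/utils/files.py
from app.utils.files import is_allowed_extension


@pytest.mark.parametrize(
    "filename,expected",
    [
        ("clip.mp4", True),
        ("clip.mov", True),
        ("document.pdf", False),
        ("no_extension", False),
    ],
)
def test_is_allowed_extension(filename, expected):
    # The allowed set is driven by ALLOWED_EXTENSIONS in .env (mp4, avi, mov, mkv)
    assert is_allowed_extension(filename) is expected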

Configuration

The application uses environment variables for configuration, which can be set in a .env file. A sample.env file is provided as a template:

# Server Configuration
HOST=0.0.0.0             # Server host address
PORT=8000                # Server port
DEBUG=True               # Debug mode flag

# Video Processing
MAX_UPLOAD_SIZE=524288000  # Maximum upload size in bytes (500MB)
ALLOWED_EXTENSIONS="mp4, avi, mov, mkv"  # Allowed video formats
UPLOAD_DIR="uploads"       # Directory for uploaded videos
PROCESSED_DIR="processed"  # Directory for processed videos

To get started:

  1. Copy sample.env to .env:
    cp sample.env .env
  2. Adjust the values in .env according to your needs

The application uses Pydantic settings management through app/core/settings.py for type-safe configuration handling. All environment variables are validated at startup.
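
For orientation, a settings module like app/core/settings.py typically looks something like the sketch below. This is illustrative only, based on the variables documented above (it assumes the pydantic-settings package for Pydantic v2; with Pydantic v1 the import would be from pydantic import BaseSettings). Field names and defaults in the real module may differ.

# Illustrative sketch of app/core/settings.py
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    # Server configuration
    host: str = "0.0.0.0"
    port: int = 8000
    debug: bool = True

    # Video processing
    max_upload_size: int = 524_288_000          # 500 MB
    allowed_extensions: str = "mp4, avi, mov, mkv"
    upload_dir: str = "uploads"
    processed_dir: str = "processed"


# Values are read from the environment / .env and validated when the module is imported
settings = Settings()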

Development

  1. Start the development server:

    uvicorn main:app --reload
  2. Run tests:

    pytest

Implementing Video Processing

The application uses a processing framework that makes it easy to implement custom video processing logic while maintaining progress tracking.

Creating a Custom Processor

  1. Create your processor function in a new file (e.g., app/utils/my_processor.py):
from app.utils.processing import ProcessingContext  # processing framework (app/utils/processing.py)

async def my_video_processor(context: ProcessingContext) -> None:
    """
    Custom video processing implementation.

    Args:
        context: ProcessingContext object providing access to paths and progress updates
    """
    # Access input and output paths
    input_video = context.input_path
    output_video = context.output_path

    # Example processing with progress updates.
    # get_total_frames, process_frames, apply_effects, and save_frame are
    # placeholders for your own processing logic.
    total_frames = get_total_frames(input_video)

    for i, frame in enumerate(process_frames(input_video)):
        # Process your frame here
        processed_frame = apply_effects(frame)
        save_frame(processed_frame, output_video)

        # Update progress (i is zero-based, so count the frame just written)
        progress = ((i + 1) / total_frames) * 100
        context.update_progress(
            progress=progress,
            current_step="Applying effects",
            total_steps=1,
            current_step_progress=progress
        )
  2. Update the upload route to use your processor:
from app.utils.my_processor import my_video_processor

@router.post("/upload")
async def upload_video(
    background_tasks: BackgroundTasks,
    video: UploadFile = File(...)
) -> Dict[str, str]:
    # ... existing code ...
    background_tasks.add_task(process_video, video_id, my_video_processor)
    # ... existing code ...

Progress Tracking

The ProcessingContext class provides utilities for updating processing status:

context.update_progress(
    progress=50.0,           # Overall progress (0-100)
    current_step="Step 1",   # Current processing step name
    total_steps=3,           # Total number of steps
    current_step_progress=75.0  # Progress of current step (0-100)
)

Progress updates are automatically sent to the frontend via SSE and displayed in:

  • Progress bar
  • Status text
  • Step details

Processing States

Videos can have the following states:

  • uploaded: Initial state after upload
  • processing: Currently being processed
  • completed: Processing finished successfully
  • error: Processing failed
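
If your own code needs to branch on these states, a small string enum keeps the values in one place. This is a sketch for client or processor code; the repository itself may track states as plain strings.

from enum import Enum


class VideoStatus(str, Enum):
    UPLOADED = "uploaded"
    PROCESSING = "processing"
    COMPLETED = "completed"
    ERROR = "error"


def is_finished(status: str) -> bool:
    """A video has left the pipeline once it is completed or errored."""
    return status in (VideoStatus.COMPLETED, VideoStatus.ERROR)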

Frontend Integration

The dashboard provides real-time updates using Server-Sent Events:

  • Progress bar shows overall completion
  • Status indicator shows current state
  • Processing details show current step and progress
  • Videos are displayed side by side when processing completes
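
The same status stream can be consumed outside the browser. The sketch below reads the SSE endpoint with the requests library; the server address and the exact payload fields are assumptions based on the metadata format shown later in this README.

import json
import requests


def follow_status(video_id: str, base_url: str = "http://localhost:8000") -> None:
    """Print processing updates from the SSE status stream until it closes."""
    url = f"{base_url}/api/video/{video_id}/status/stream"
    with requests.get(url, stream=True) as response:
        response.raise_for_status()
        for line in response.iter_lines(decode_unicode=True):
            # SSE data lines are prefixed with "data: "; the payload is assumed
            # to mirror the metadata endpoint (status, progress, ...).
            if line and line.startswith("data:"):
                event = json.loads(line[len("data:"):].strip())
                print(event.get("status"), event.get("progress"))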

Error Handling

The framework includes comprehensive error handling:

  • File validation
  • Processing errors
  • Connection handling
  • Cleanup of failed uploads
  • Detailed error logging

API Endpoints

Upload

  • POST /api/upload: Upload a video file

Example request:

curl -X POST "http://localhost:8000/api/upload" \
  -H "Content-Type: multipart/form-data" \
  -F "video=@my_video.mp4"

Response:

{
  "status": "success",
  "message": "Video uploaded successfully and processing started",
  "video_id": "550e8400-e29b-41d4-a716-446655440000",
  "original_filename": "my_video.mp4",
  "extension": "mp4"
}

Video

  • GET /api/video/{video_id}: Stream video (original or processed)
  • GET /api/video/{video_id}/metadata: Get video metadata
  • GET /api/video/{video_id}/status/stream: SSE endpoint for processing status

Example metadata response:

{
  "status": "processing",
  "original_filename": "my_video.mp4",
  "upload_time": "2024-03-14T12:00:00",
  "file_size": 1048576,
  "extension": "mp4",
  "processed": false,
  "progress": 45.5,
  "processing_details": {
    "current_step": "Applying filters",
    "total_steps": 3,
    "current_step_progress": 75.0
  }
}
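
For scripts that do not need streaming updates, the metadata endpoint can simply be polled until processing settles (a sketch; the base URL is assumed):

import time
import requests


def wait_for_processing(video_id: str, base_url: str = "http://localhost:8000") -> dict:
    """Poll the metadata endpoint until the video leaves the processing pipeline."""
    while True:
        metadata = requests.get(f"{base_url}/api/video/{video_id}/metadata").json()
        if metadata["status"] not in ("uploaded", "processing"):
            return metadata
        time.sleep(2)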

Health

  • GET /health: Application health check

Response:

{
  "status": "healthy"
}

Environment Variables

# Server Configuration
HOST=0.0.0.0
PORT=8000
DEBUG=True

# Video Processing
ALLOWED_EXTENSIONS="mp4, avi, mov, mkv"
UPLOAD_DIR="uploads"
PROCESSED_DIR="processed"

Development Workflow

Setting Up Development Environment

  1. Fork and clone the repository
  2. Create a feature branch:
    git checkout -b feature/my-new-feature
  3. Set up pre-commit hooks:
    pip install pre-commit
    pre-commit install

Implementation Examples

Basic Video Processing

import cv2

from app.utils.processing import ProcessingContext  # processing framework (app/utils/processing.py)

async def grayscale_processor(context: ProcessingContext) -> None:
    """Convert video to grayscale."""
    cap = cv2.VideoCapture(str(context.input_path))
    total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) or 1  # avoid division by zero
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    out = cv2.VideoWriter(
        str(context.output_path),
        cv2.VideoWriter_fourcc(*'mp4v'),
        fps,
        (width, height),
        isColor=False
    )

    try:
        frame_count = 0
        while cap.isOpened():
            ret, frame = cap.read()
            if not ret:
                break

            # Convert to grayscale
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            out.write(gray)

            # Update progress
            frame_count += 1
            progress = (frame_count / total_frames) * 100
            context.update_progress(
                progress=progress,
                current_step="Converting to grayscale",
                total_steps=1,
                current_step_progress=progress
            )

    finally:
        cap.release()
        out.release()

Multi-Step Processing

async def multi_step_processor(context: ProcessingContext) -> None:
    """Process video with multiple steps."""
    steps = [
        ("Resizing", resize_video),
        ("Applying filters", apply_filters),
        ("Adding watermark", add_watermark)
    ]

    for step_idx, (step_name, step_func) in enumerate(steps, 1):
        context.update_progress(
            progress=(step_idx - 1) * (100 / len(steps)),
            current_step=step_name,
            total_steps=len(steps),
            current_step_progress=0
        )

        await step_func(context, step_idx, len(steps))
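
Each step function receives the context plus its position in the pipeline, so it can report both per-step and overall progress. The sketch below shows what one such step might look like; resize_video here is illustrative, not code from the repository, and the frame loop is a placeholder.

async def resize_video(context: ProcessingContext, step_idx: int, total_steps: int) -> None:
    """Illustrative step: report per-step and overall progress while resizing."""
    total_frames = 100  # placeholder; derive this from the input video in real code
    for frame_no in range(1, total_frames + 1):
        # ... resize the frame and write it to context.output_path ...
        step_progress = (frame_no / total_frames) * 100
        overall = ((step_idx - 1) + frame_no / total_frames) / total_steps * 100
        context.update_progress(
            progress=overall,
            current_step="Resizing",
            total_steps=total_steps,
            current_step_progress=step_progress,
        )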

Development Guidelines

  1. Code Style

    • Follow PEP 8
    • Use type hints
    • Include docstrings
    • Write descriptive variable names
  2. Testing

    • Write unit tests for new features
    • Test error handling
    • Verify progress updates
    • Check memory usage for large files
  3. Performance

    • Profile processing functions
    • Consider memory usage
    • Use async where appropriate
    • Implement proper cleanup
  4. Pull Requests

    • Keep changes focused
    • Include tests
    • Update documentation
    • Add implementation examples if needed

Deployment

See Deployment Guide for detailed deployment instructions.
