# Video Processing Web Application

A FastAPI-based web application that provides a framework for video upload, processing, and streaming with real-time processing status updates.

## Features
- Video upload with extension validation
- Real-time processing status updates via Server-Sent Events (SSE)
- Progress tracking with detailed step information
- Side-by-side video comparison (original vs processed)
- Background task processing
- Streaming video playback
- Clean separation of concerns with utility modules
- Health monitoring and automatic container recovery
- Configurable video processing pipeline
## Project Structure

```
.
├── pyproject.toml          # Project metadata and build configuration
├── setup.py                # Development installation
├── requirements.txt        # Production dependencies
├── requirements-dev.txt    # Development dependencies
├── pytest.ini              # Pytest configuration
├── setup.sh                # Installation script
├── start.sh                # Application startup script
├── main.py                 # Application entry point
├── app/
│   ├── core/               # Core application settings and config
│   │   └── settings.py     # Application configuration
│   ├── routes/
│   │   ├── dashboard.py    # Dashboard UI route
│   │   ├── upload.py       # Video upload handling
│   │   ├── video.py        # Video streaming and status
│   │   └── health.py       # Health check endpoint
│   ├── templates/
│   │   └── dashboard.html  # Main UI template
│   └── utils/
│       ├── files.py        # File handling utilities
│       └── processing.py   # Video processing framework
├── static/                 # Static files directory
├── uploads/                # Original video storage
├── processed/              # Processed video storage
├── tests/                  # Test suite directory
│   ├── conftest.py         # Test configuration and fixtures
│   └── utils/              # Utility tests
├── deployments/            # Deployment configurations
│   └── infra/              # Infrastructure as code
├── .env                    # Environment configuration
└── sample.env              # Example environment configuration
```
## Installation

### Quick Start

The easiest way to get started is using the setup script:

```bash
# Make the script executable
chmod +x setup.sh

# Run the setup script
./setup.sh
```
The setup script will:
- Create a Python virtual environment
- Install all required dependencies
- Create necessary directories (uploads, processed)
- Set up a default .env file
- Configure logging
### Manual Setup

If you prefer to set up manually, follow these steps:

1. Create a virtual environment:

   ```bash
   python -m venv .venv
   source .venv/bin/activate  # On Windows: .venv\Scripts\activate
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Configure environment variables in `.env`:

   ```bash
   HOST=0.0.0.0
   PORT=8000
   DEBUG=True
   ALLOWED_EXTENSIONS="mp4, avi, mov, mkv"
   UPLOAD_DIR="uploads"
   PROCESSED_DIR="processed"
   ```

4. Run the application:

   ```bash
   # Make the start script executable
   chmod +x start.sh

   # Start the application
   ./start.sh

   # On Windows:
   # python main.py
   ```
### Development Setup

For development, you may want to install additional tools:

```bash
# Install development dependencies
pip install -r requirements-dev.txt

# Set up pre-commit hooks
pre-commit install

# Install the package in development mode
pip install -e .
```
## Testing

```bash
# Run all tests
pytest

# Run with coverage report
pytest --cov=app

# Run specific test file
pytest tests/utils/test_files.py

# Run tests matching a pattern
pytest -k "test_processing"
```

For more testing information, see `tests/README.md`.
## Configuration

The application uses environment variables for configuration, which can be set in a `.env` file. A `sample.env` file is provided as a template:

```bash
# Server Configuration
HOST=0.0.0.0                              # Server host address
PORT=8000                                 # Server port
DEBUG=True                                # Debug mode flag

# Video Processing
MAX_UPLOAD_SIZE=524288000                 # Maximum upload size in bytes (500MB)
ALLOWED_EXTENSIONS="mp4, avi, mov, mkv"   # Allowed video formats
UPLOAD_DIR="uploads"                      # Directory for uploaded videos
PROCESSED_DIR="processed"                 # Directory for processed videos
```
To get started:

1. Copy `sample.env` to `.env`:

   ```bash
   cp sample.env .env
   ```

2. Adjust the values in `.env` according to your needs.
The application uses Pydantic settings management through `app/core/settings.py` for type-safe configuration handling. All environment variables are validated at startup.
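To illustrate the startup-validation idea without depending on Pydantic here, the following is a stdlib-only sketch; the `Settings` class and its fields are illustrative stand-ins, not the actual contents of `app/core/settings.py`:

```python
import os
from dataclasses import dataclass, field
from typing import List


@dataclass
class Settings:
    """Illustrative stand-in for the real Pydantic settings class."""
    host: str = "0.0.0.0"
    port: int = 8000
    debug: bool = False
    allowed_extensions: List[str] = field(default_factory=list)

    @classmethod
    def from_env(cls) -> "Settings":
        # Coerce and validate environment variables at startup,
        # failing fast on malformed values (as Pydantic would).
        port = int(os.environ.get("PORT", "8000"))
        if not (0 < port < 65536):
            raise ValueError(f"PORT out of range: {port}")
        raw = os.environ.get("ALLOWED_EXTENSIONS", "mp4, avi, mov, mkv")
        extensions = [ext.strip().lower() for ext in raw.strip('"').split(",")]
        return cls(
            host=os.environ.get("HOST", "0.0.0.0"),
            port=port,
            debug=os.environ.get("DEBUG", "False").lower() in ("1", "true", "yes"),
            allowed_extensions=extensions,
        )
```

The real settings module gets the same behavior (defaults, coercion, fail-fast validation) from Pydantic's `BaseSettings` machinery rather than hand-rolled parsing.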
## Development

1. Start the development server:

   ```bash
   uvicorn main:app --reload
   ```

2. Run tests:

   ```bash
   pytest
   ```
## Video Processing Framework

The application uses a processing framework that makes it easy to implement custom video processing logic while maintaining progress tracking.

1. Create your processor function in a new file (e.g., `app/utils/my_processor.py`):

   ```python
   async def my_video_processor(context: ProcessingContext) -> None:
       """
       Custom video processing implementation.

       Args:
           context: ProcessingContext object providing access to paths and progress updates
       """
       # Access input and output paths
       input_video = context.input_path
       output_video = context.output_path

       # Example processing with progress updates
       total_frames = get_total_frames(input_video)
       for i, frame in enumerate(process_frames(input_video)):
           # Process your frame here
           processed_frame = apply_effects(frame)
           save_frame(processed_frame, output_video)

           # Update progress (i is zero-based, so count the frame just finished)
           progress = ((i + 1) / total_frames) * 100
           context.update_progress(
               progress=progress,
               current_step="Applying effects",
               total_steps=1,
               current_step_progress=progress
           )
   ```
2. Update the upload route to use your processor:

   ```python
   from typing import Dict

   from fastapi import BackgroundTasks, File, UploadFile

   from app.utils.my_processor import my_video_processor

   @router.post("/upload")
   async def upload_video(
       background_tasks: BackgroundTasks,
       video: UploadFile = File(...)
   ) -> Dict[str, str]:
       # ... existing code ...
       background_tasks.add_task(process_video, video_id, my_video_processor)
       # ... existing code ...
   ```
### Progress Updates

The `ProcessingContext` class provides utilities for updating processing status:

```python
context.update_progress(
    progress=50.0,              # Overall progress (0-100)
    current_step="Step 1",      # Current processing step name
    total_steps=3,              # Total number of steps
    current_step_progress=75.0  # Progress of current step (0-100)
)
```
Progress updates are automatically sent to the frontend via SSE and displayed in:
- Progress bar
- Status text
- Step details
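On the wire, each update follows the standard SSE framing (`data: <payload>`, terminated by a blank line). A minimal sketch of serializing a status dict for the stream; the helper name and fields are illustrative, not taken from the app's source:

```python
import json
from typing import Any, Dict, Optional


def format_sse(data: Dict[str, Any], event: Optional[str] = None) -> str:
    """Frame a JSON payload as a Server-Sent Events message."""
    message = ""
    if event is not None:
        # Optional named event, so the client can listen for a specific type
        message += f"event: {event}\n"
    # The blank line after the data field terminates the event
    message += f"data: {json.dumps(data)}\n\n"
    return message
```

A FastAPI endpoint would typically yield such strings from an async generator wrapped in a `StreamingResponse` with `media_type="text/event-stream"`.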
### Status States

Videos can have the following states:

- `uploaded`: Initial state after upload
- `processing`: Currently being processed
- `completed`: Processing finished successfully
- `error`: Processing failed
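One way to keep those states consistent is a small transition table; this is a sketch of the state machine implied above, not necessarily how the app enforces it internally:

```python
# Allowed status transitions (sketch; actual enforcement may differ).
# Terminal states (completed, error) have no outgoing transitions.
TRANSITIONS = {
    "uploaded": {"processing"},
    "processing": {"completed", "error"},
    "completed": set(),
    "error": set(),
}


def can_transition(current: str, new: str) -> bool:
    """Return True if moving from `current` to `new` is a valid status change."""
    return new in TRANSITIONS.get(current, set())
```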
### Real-Time Updates

The dashboard provides real-time updates using Server-Sent Events:
- Progress bar shows overall completion
- Status indicator shows current state
- Processing details show current step and progress
- Videos are displayed side by side when processing completes
### Error Handling

The framework includes comprehensive error handling:
- File validation
- Processing errors
- Connection handling
- Cleanup of failed uploads
- Detailed error logging
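As a sketch of the failure-handling and cleanup idea (the real implementation lives in `app/utils/processing.py` and may differ; `run_processor_safely` and the `metadata` dict are illustrative):

```python
import logging
from pathlib import Path

logger = logging.getLogger(__name__)


async def run_processor_safely(processor, context, metadata: dict) -> None:
    """Run a processor, recording failures and cleaning up partial output.

    `metadata` is an illustrative stand-in for wherever video status is stored.
    """
    metadata["status"] = "processing"
    try:
        await processor(context)
        metadata["status"] = "completed"
    except Exception as exc:
        # Record the failure rather than re-raising: a background task
        # has no request to propagate the error to.
        logger.exception("Processing failed")
        metadata["status"] = "error"
        metadata["error"] = str(exc)
        # Remove any partially written output file
        output = Path(context.output_path)
        if output.exists():
            output.unlink()
```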
## API Endpoints

### Upload

- `POST /api/upload`: Upload a video file

Example request:

```bash
curl -X POST "http://localhost:8000/api/upload" \
  -H "Content-Type: multipart/form-data" \
  -F "video=@my_video.mp4"
```

Response:

```json
{
  "status": "success",
  "message": "Video uploaded successfully and processing started",
  "video_id": "550e8400-e29b-41d4-a716-446655440000",
  "original_filename": "my_video.mp4",
  "extension": "mp4"
}
```
### Video

- `GET /api/video/{video_id}`: Stream video (original or processed)
- `GET /api/video/{video_id}/metadata`: Get video metadata
- `GET /api/video/{video_id}/status/stream`: SSE endpoint for processing status

Example metadata response:

```json
{
  "status": "processing",
  "original_filename": "my_video.mp4",
  "upload_time": "2024-03-14T12:00:00",
  "file_size": 1048576,
  "extension": "mp4",
  "processed": false,
  "progress": 45.5,
  "processing_details": {
    "current_step": "Applying filters",
    "total_steps": 3,
    "current_step_progress": 75.0
  }
}
```
### Health

- `GET /health`: Application health check

Response:

```json
{
  "status": "healthy"
}
```
## Contributing

1. Fork and clone the repository

2. Create a feature branch:

   ```bash
   git checkout -b feature/my-new-feature
   ```

3. Set up pre-commit hooks:

   ```bash
   pip install pre-commit
   pre-commit install
   ```
## Example Processors

### Grayscale Processor

A complete processor that converts a video to grayscale with OpenCV, reporting progress per frame:

```python
import cv2


async def grayscale_processor(context: ProcessingContext) -> None:
    """Convert video to grayscale."""
    cap = cv2.VideoCapture(str(context.input_path))
    total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    out = cv2.VideoWriter(
        str(context.output_path),
        cv2.VideoWriter_fourcc(*'mp4v'),
        fps,
        (width, height),
        isColor=False
    )

    try:
        frame_count = 0
        while cap.isOpened():
            ret, frame = cap.read()
            if not ret:
                break

            # Convert to grayscale
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            out.write(gray)

            # Update progress
            frame_count += 1
            progress = (frame_count / total_frames) * 100
            context.update_progress(
                progress=progress,
                current_step="Converting to grayscale",
                total_steps=1,
                current_step_progress=progress
            )
    finally:
        cap.release()
        out.release()
```
### Multi-Step Processor

```python
async def multi_step_processor(context: ProcessingContext) -> None:
    """Process video with multiple steps."""
    steps = [
        ("Resizing", resize_video),
        ("Applying filters", apply_filters),
        ("Adding watermark", add_watermark)
    ]

    for step_idx, (step_name, step_func) in enumerate(steps, 1):
        context.update_progress(
            progress=(step_idx - 1) * (100 / len(steps)),
            current_step=step_name,
            total_steps=len(steps),
            current_step_progress=0
        )
        await step_func(context, step_idx, len(steps))
```
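The overall-progress arithmetic in the multi-step example generalizes: with equally weighted steps, a step that is partway done contributes its fraction of one step's share. A small helper sketch (the function name is illustrative, not part of the framework):

```python
def overall_progress(step_idx: int, total_steps: int, step_progress: float) -> float:
    """Combine a 1-based step index and per-step progress (0-100) into
    an overall percentage, assuming all steps are weighted equally."""
    completed_steps = step_idx - 1
    return (completed_steps + step_progress / 100.0) / total_steps * 100.0
```

For example, being halfway through step 2 of 3 gives an overall progress of 50%.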
## Development Guidelines

1. **Code Style**
   - Follow PEP 8
   - Use type hints
   - Include docstrings
   - Write descriptive variable names

2. **Testing**
   - Write unit tests for new features
   - Test error handling
   - Verify progress updates
   - Check memory usage for large files

3. **Performance**
   - Profile processing functions
   - Consider memory usage
   - Use async where appropriate
   - Implement proper cleanup

4. **Pull Requests**
   - Keep changes focused
   - Include tests
   - Update documentation
   - Add implementation examples if needed
## Deployment

See the Deployment Guide for detailed deployment instructions.