{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "# ![upcode](images/front.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "<h1><center>Why Self-Driving Cars?</center></h1>\n",
    "<br>\n",
    "<center><img src=\"images/autonomous_car.png\" alt=\"Autonomous Car\"></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<center><img src=\"images/blue_car.png\" alt=\"Blue Car\"></center>\n",
    "\n",
    "# Cars are a good thing\n",
    "\n",
    "<center><img src=\"images/chuck.gif\" alt=\"Cars are good\"></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# Cars helps us move\n",
    "# 1 billion cars worldwide\n",
    "# Transportation\n",
    "# Shape the cities, roads, world..."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<center><img src=\"images/manila.png\" alt=\"Manila Traffic Jam\"></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<h3>\n",
    "<ul>\n",
    "  <li>Filipinos spend 16 days a year on traffic</li>\n",
    "  <ul>\n",
    "    <li>Loosing 2,663 sgd inconme</li>\n",
    "  </ul>\n",
    "  <li>Comuters in Singapore spend 30 mins in traffic</li>\n",
    "  <ul>\n",
    "    <li>19 looking for parking spot</li>\n",
    "  </ul>\n",
    "</ul>\n",
    "</h3>\n",
    "<br>\n",
    "<br>\n",
    "<h4><div align=\"right\">source: straitstimes.com</div></h4>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<center><img src=\"images/bkk.jpg\" alt=\"Bkk Traffic Jam\"></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<h3>\n",
    "<ul>\n",
    "  <li>24,000 deaths anually in Thailand</li>\n",
    "  <ul>\n",
    "    <li>Loses 3 to 5 % of its GDP</li>\n",
    "  </ul>\n",
    "  <li>10,000 injured each year in SG</li>\n",
    "  <ul>\n",
    "    <li>160 people die</li>\n",
    "  </ul>\n",
    "</ul>\n",
    "</h3>\n",
    "<br>\n",
    "<br>\n",
    "<h4><div align=\"right\">sources: World Health Organization and police.gov.sg</div></h4>\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# Road fatality one of the top 10 global causes of deaths"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "<h1><center>But... How?</center></h1>\n",
    "<center><img src=\"images/wtf.png\" alt=\"Wtf?\"></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<center><img src=\"images/selfdrivingcar.png\" alt=\"Self Driving Car\"></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "<h1><center>Why Computer Vision?</center></h1>\n",
    "<br>\n",
    "<center><img src=\"images/computer_vision.gif\" alt=\"Computer Vision\"></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<center><img src=\"images/cameras.png\" alt=\"Cameras\"></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<h3>\n",
    "<ul>\n",
    "  <li>Cisco Visual Networking Index:</li>\n",
    "  <ul>\n",
    "    <li>5 million years to watch one month - 2021</li>\n",
    "    <li>82% of global traffic</li>\n",
    "    <li>Video surveillance traffic increased 72%</li>\n",
    "    <li>Virtual and augmented reality traffic will increase 20 times</li>\n",
    "  </ul>\n",
    "</ul>\n",
    "</h3>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<center><img src=\"images/youtube.png\" alt=\"Youtube\"></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# Computer vision engineer: \n",
    "<br>\n",
    "\n",
    "## IT job that will be most in demand in 2020 *\n",
    "## More than 190k sgd salary in the US **\n",
    "\n",
    "<br>\n",
    "<h4><div align=\"right\">*source: techproresearch.com</div></h4>\n",
    "<h4><div align=\"right\">**source: indeed.com</div></h4>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# First rule of presentations:\n",
    "# Don't demo!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<h1><center>Let's demo!</center></h1>\n",
    "<center><img src=\"images/homer.gif\" alt=\"Crazy Homer\"></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<center> <video controls src=\"data/singapore_drive.mp4\" /> </center>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "code_folding": [],
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "import cv2\n",
    "import os\n",
    "import numpy as np\n",
    "from moviepy.editor import VideoFileClip\n",
    "from collections import deque\n",
    "%matplotlib inline\n",
    "%config InlineBackend.figure_format = 'retina'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "code_folding": [
     0,
     2,
     10,
     12,
     14,
     16,
     23,
     33,
     35,
     59,
     71,
     81
    ],
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "def convert_to_hls(image):\n",
    "    return cv2.cvtColor(image, cv2.COLOR_RGB2HLS)\n",
    "def select_white(image):\n",
    "    converted = convert_to_hls(image)\n",
    "    # white color mask\n",
    "    lower = np.uint8([  0, 110,   0])\n",
    "    upper = np.uint8([255, 180, 250])\n",
    "    white_mask = cv2.inRange(converted, lower, upper)\n",
    "    masked = cv2.bitwise_and(image, image, mask = white_mask)\n",
    "    return masked\n",
    "def convert_to_grayscale(image):\n",
    "    return cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)\n",
    "def smoothing(image, kernel_size=15):\n",
    "    return cv2.GaussianBlur(image, (kernel_size, kernel_size), 0)\n",
    "def detect_edges(image, low_threshold=10, high_threshold=100):\n",
    "    return cv2.Canny(image, low_threshold, high_threshold)\n",
    "def filter_region(image, vertices):\n",
    "    mask = np.zeros_like(image)\n",
    "    if len(mask.shape)==2:\n",
    "        cv2.fillPoly(mask, vertices, 255)\n",
    "    else:\n",
    "        cv2.fillPoly(mask, vertices, (255,)*mask.shape[2])\n",
    "    return cv2.bitwise_and(image, mask)\n",
    "def select_region(image):\n",
    "    rows, cols = image.shape[:2]\n",
    "    bottom_left  = [int(cols*0.2), int(rows*0.95)]\n",
    "    top_left     = [int(cols*0.5), int(rows*0.65)]\n",
    "    bottom_right = [int(cols*0.95), int(rows*0.95)]\n",
    "    top_right    = [int(cols*0.65), int(rows*0.65)] \n",
    "    \n",
    "    vertices = np.array([[bottom_left, top_left, top_right, bottom_right]], dtype=np.int32)\n",
    "    \n",
    "    return filter_region(image, vertices)\n",
    "def hough_lines(image):\n",
    "    return cv2.HoughLinesP(image, rho=1, theta=np.pi/180, threshold=5, minLineLength=5, maxLineGap=200)\n",
    "def average_slope_intercept(lines):\n",
    "    left_lines    = [] \n",
    "    left_weights  = [] \n",
    "    right_lines   = [] \n",
    "    right_weights = []\n",
    "    \n",
    "    for line in lines:\n",
    "        for x1, y1, x2, y2 in line:\n",
    "            if x2==x1:\n",
    "                continue\n",
    "            slope = (y2-y1)/(x2-x1)\n",
    "            intercept = y1 - slope*x1\n",
    "            length = np.sqrt((y2-y1)**2+(x2-x1)**2)\n",
    "            if slope < 0:\n",
    "                left_lines.append((slope, intercept))\n",
    "                left_weights.append((length))\n",
    "            else:\n",
    "                right_lines.append((slope, intercept))\n",
    "                right_weights.append((length))\n",
    "     \n",
    "    left_lane  = np.dot(left_weights,  left_lines) /np.sum(left_weights)  if len(left_weights) >0 else None\n",
    "    right_lane = np.dot(right_weights, right_lines)/np.sum(right_weights) if len(right_weights)>0 else None\n",
    "    \n",
    "    return left_lane, right_lane\n",
    "def calculate_line_points(y1, y2, line):\n",
    "    if line is None:\n",
    "        return None\n",
    "    \n",
    "    slope, intercept = line\n",
    "    \n",
    "    x1 = int((y1 - intercept)/slope)\n",
    "    x2 = int((y2 - intercept)/slope)\n",
    "    y1 = int(y1)\n",
    "    y2 = int(y2)\n",
    "    \n",
    "    return ((x1, y1), (x2, y2))\n",
    "def lane_lines(image, lines):\n",
    "    left_lane, right_lane = average_slope_intercept(lines)\n",
    "    \n",
    "    y1 = image.shape[0]\n",
    "    y2 = y1*0.7         \n",
    "\n",
    "    left_line  = calculate_line_points(y1, y2, left_lane)\n",
    "    right_line = calculate_line_points(y1, y2, right_lane)\n",
    "    \n",
    "    return left_line, right_line\n",
    "def draw_lane_lines(image, lines, color=[0, 0, 255], thickness=20):\n",
    "    \n",
    "    line_image = np.zeros_like(image)\n",
    "    for line in lines:\n",
    "        if line is not None:\n",
    "            cv2.line(line_image, *line,  color, thickness)\n",
    "    return cv2.addWeighted(image, 1.0, line_image, 0.9, 0.0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "code_folding": [
     0
    ],
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "class LanesDetector:\n",
    "    def __init__(self):\n",
    "        self.left_lines  = deque(maxlen=50)\n",
    "        self.right_lines = deque(maxlen=50)\n",
    "        \n",
    "    def mean_line(self, line, lines):\n",
    "        if line is not None:\n",
    "            lines.append(line)\n",
    "\n",
    "        if len(lines)>0:\n",
    "            line = np.mean(lines, axis=0, dtype=np.int32)\n",
    "            line = tuple(map(tuple, line))\n",
    "        return line\n",
    "\n",
    "    def process(self, image):\n",
    "        white        = select_white(image)\n",
    "        gray         = convert_to_grayscale(white)\n",
    "        smooth_gray  = smoothing(gray)\n",
    "        edges        = detect_edges(smooth_gray)\n",
    "        regions      = select_region(edges)\n",
    "        lines        = hough_lines(regions)\n",
    "        left_line, right_line = lane_lines(image, lines)\n",
    "\n",
    "        left_line  = self.mean_line(left_line,  self.left_lines)\n",
    "        right_line = self.mean_line(right_line, self.right_lines)\n",
    "\n",
    "        return draw_lane_lines(image, (left_line, right_line))\n",
    "    \n",
    "def process_video(video_input, video_output):\n",
    "    detector = LanesDetector()\n",
    "\n",
    "    clip = VideoFileClip(os.path.join('data', video_input))\n",
    "    processed = clip.fl_image(detector.process)\n",
    "    processed.write_videofile(os.path.join('data', video_output), audio=False)\n",
    "\n",
    "%time process_video('singapore_drive.mp4', 'detected_lanes.mp4')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<center> <video controls src=\"data/detected_lanes.mp4\" /> </center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "<h1><center>But... How?</center></h1>\n",
    "<center><img src=\"images/wtf.png\" alt=\"Wtf?\"></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "<h1><center>Learn the ways of Computer Vision you must...</center></h1>\n",
    "\n",
    "<center><img src=\"images/yoda.png\" alt=\"yoda\"></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# The tool\n",
    "\n",
    "<center><img src=\"images/opencv.png\" alt=\"opencv\">\n",
    "\n",
    "<h3><a href=\"https://docs.opencv.org\">docs.opencv.org</a></h3></center>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "import cv2\n",
    "import os\n",
    "import numpy as np\n",
    "from moviepy.editor import VideoFileClip\n",
    "from collections import deque\n",
    "\n",
    "%matplotlib inline\n",
    "%config InlineBackend.figure_format = 'retina'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# What's an image?\n",
    "\n",
    "<center><img src=\"images/philosophator.png\" alt=\"philosophator\"></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<center><img src=\"images/matrix.png\" alt=\"matrix\"></center>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "code_folding": [
     0
    ],
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "def show_image(image, cmap=None):\n",
    "    plt.figure(figsize=(11,11))\n",
    "    if len(image.shape)==2:\n",
    "        cmap = 'gray'\n",
    "    else:\n",
    "        image=cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n",
    "    plt.imshow(image, cmap)\n",
    "    plt.xticks([])\n",
    "    plt.yticks([])\n",
    "    plt.show()\n",
    "\n",
    "#Read the image with OpenCV\n",
    "image = cv2.imread('data/test_image.png')\n",
    "\n",
    "show_image(image)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "image.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "image[0,0]"
   ]
  },
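  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "*A minimal sketch (not part of the original pipeline): since the image is just a NumPy array in BGR channel order, plain slicing selects channels and regions.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "# Sketch: an image is a NumPy array, so slicing works directly on it.\n",
    "# OpenCV stores channels in BGR order, so channel 0 is blue.\n",
    "blue_channel = image[:, :, 0]      # 2D array: the blue channel\n",
    "patch = image[:100, :100]          # top-left 100x100 pixel region\n",
    "print(blue_channel.shape, patch.shape)\n",
    "show_image(blue_channel)           # 2D input is rendered in grayscale"
   ]
  },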
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# Masks:\n",
    "## Matrix with some pixel values to zero and others to non zero\n",
    "## Output of algorithms will be used as a mask\n",
    "## Used to select parts of the image\n",
    "\n",
    "<center><img src=\"images/mask.png\" alt=\"mask\"></center>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "code_folding": [
     0
    ],
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "def select_white(image): \n",
    "    # white color mask\n",
    "    lower = np.uint8([200, 200, 200])\n",
    "    upper = np.uint8([255, 255, 255])\n",
    "    white_mask = cv2.inRange(image, lower, upper)\n",
    "   \n",
    "    masked = cv2.bitwise_and(image, image, mask = white_mask)\n",
    "    return masked\n",
    "\n",
    "show_image(select_white(image))"
   ]
  },
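  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "*A quick sketch to visualise the mask itself: `cv2.inRange` returns exactly the binary matrix of zeros and non-zeros described above.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "# Sketch: show the raw binary mask - zero pixels (black) are dropped,\n",
    "# non-zero pixels (white) are kept by cv2.bitwise_and above.\n",
    "lower = np.uint8([200, 200, 200])\n",
    "upper = np.uint8([255, 255, 255])\n",
    "show_image(cv2.inRange(image, lower, upper))"
   ]
  },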
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# Images Representation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<center><img src=\"images/codification.png\" alt=\"Color Representations\"></center>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "def convert_to_hsv(image):\n",
    "    return cv2.cvtColor(image, cv2.COLOR_RGB2HSV)\n",
    "\n",
    "show_image(convert_to_hsv(image))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "def convert_to_hls(image):\n",
    "    return cv2.cvtColor(image, cv2.COLOR_RGB2HLS)\n",
    "\n",
    "show_image(convert_to_hls(image))"
   ]
  },
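  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "*A sketch (not in the original deck): splitting the HLS image into channels shows why thresholding lightness isolates the white lane paint.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "# Sketch: inspect the individual HLS channels. Lane paint has high\n",
    "# lightness, which is what the white mask below thresholds on.\n",
    "h, l, s = cv2.split(convert_to_hls(image))\n",
    "show_image(l)   # bright pixels = high lightness"
   ]
  },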
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "code_folding": [],
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "def select_white(image):\n",
    "    converted = convert_to_hls(image)\n",
    "    # white color mask\n",
    "    lower = np.uint8([  0, 110,   0])\n",
    "    upper = np.uint8([255, 180, 250])\n",
    "    white_mask = cv2.inRange(converted, lower, upper)\n",
    "    masked = cv2.bitwise_and(image, image, mask = white_mask)\n",
    "    return masked\n",
    "\n",
    "white_selection = select_white(image)\n",
    "show_image(white_selection) "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# Canny Edges\n",
    "\n",
    "## A multi-stage algorithm:\n",
    "\n",
    "#### 1. Gaussian filter\n",
    "#### 2. Finds edge gradient and direction for each pixel\n",
    "\n",
    "<img src=\"images/gradient.png\" alt=\"Gradient\">\n",
    "\n",
    "#### 3. Pixels are checked for local maximum in its neighborhood in the direction of gradient.\n",
    "#### 4. Hysteresis Thresholding: Uses minVal and maxVal as threshold of intensity gradient to finally detect the edges. Those in between are selected based on their connectivity.\n",
    "\n",
    "<center><a href=\"https://docs.opencv.org/3.4.2/dd/d1a/group__imgproc__feature.html#ga04723e007ed888ddf11d9ba04e2232de\">cv2.Canny()</a></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# Canny Edges\n",
    "\n",
    "### Good For:\n",
    "\n",
    "<h4>\n",
    "<ul>\n",
    "  <li>Edge detection</li>\n",
    "  <li>Preprocessing of images prior to lines or shapes detection</li>\n",
    "</ul>\n",
    "</h4>\n",
    "\n",
    "### Major Drawbacks:\n",
    "\n",
    "<h4>\n",
    "<ul>\n",
    "  <li>Edge detection is sensitive to noise in the image</li>\n",
    "  <li>General thresholding problems apply</li>\n",
    "</ul>\n",
    "</h4>\n",
    "    \n",
    "    \n",
    "\n",
    "### Important tips:\n",
    "\n",
    "<h4>\n",
    "<ul>\n",
    "  <li>The smoothing directly affects results</li>\n",
    "  <li>Larger blurry kernels are more useful for detecting larger smother edges</li>\n",
    "</ul>\n",
    "</h4>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# Convert to grayscale\n",
    "\n",
    "<h3><center> Since Canny measures the gradient </center></h3>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "code_folding": [],
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "def convert_to_gray_scale(image):\n",
    "    return cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)\n",
    "\n",
    "gray_selection = convert_to_gray_scale(white_selection)\n",
    "show_image(gray_selection)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# Gaussian Blur\n",
    "## Remove Noise from Images\n",
    "## Blur images using Gussian filter\n",
    "\n",
    "<center><a href=\"https://docs.opencv.org/3.4.2/d4/d86/group__imgproc__filter.html#gaabe8c836e97159a9193fb0b11ac52cf1\">cv2.GaussianBlur()</a></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# Gaussian Blur\n",
    "\n",
    "<center><img src=\"images/convolution.jpg\" alt=\"convolution\"> <img src=\"images/gaussian.png\" alt=\"gaussian\"> </center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    " # Gaussian Blur\n",
    " \n",
    " ## Edges: intensity changes rapidly\n",
    " ## Making edges smother to reduce noise\n",
    " ## Kernel size (smaller values if effect is similar) "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "def smoothing(image, kernel_size=15):\n",
    "    return cv2.GaussianBlur(image, (kernel_size, kernel_size), 0)\n",
    "\n",
    "smooth_image =  smoothing(gray_selection)\n",
    "show_image(smooth_image)"
   ]
  },
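  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "*A minimal sketch comparing two arbitrary kernel sizes, to see the trade-off behind the tip above.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "# Sketch: compare two (arbitrary) kernel sizes. Larger kernels suppress\n",
    "# more noise but also soften the edges we want to keep.\n",
    "for k in (5, 25):   # the kernel size must be odd\n",
    "    show_image(smoothing(gray_selection, kernel_size=k))"
   ]
  },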
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# Canny Edges (OpenCV)\n",
    "\n",
    "### Selection based on pixel gradient :\n",
    "\n",
    "<h4>\n",
    "<ul>\n",
    "  <li>Higher than the upper threshold: Accepted</li>\n",
    "  <li>Below the lower threshold: Rejected</li>\n",
    "  <li>Between the two: Accepted if connected to a pixel that is above the upper threshold </li>\n",
    "</ul>\n",
    "</h4>\n",
    "\n",
    "### Important tips:\n",
    "\n",
    "<h4>\n",
    "<ul>\n",
    "  <li>Recommended a upper:lower ratio between 2:1 and 3:1</li>\n",
    "  <li>Use trials and errors</li>\n",
    "</ul>\n",
    "</h4>\n",
    "\n",
    "<h3><center><a href=\"https://docs.opencv.org/3.4.2/da/d22/tutorial_py_canny.html\">Canny Edge Detection OpenCV Tutorial</a></center></h3>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "code_folding": [
     0
    ],
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "def detect_edges(image, low_threshold=10, high_threshold=100):\n",
    "    return cv2.Canny(image, low_threshold, high_threshold)\n",
    "\n",
    "edges_image =  detect_edges(smooth_image)\n",
    "show_image(edges_image)"
   ]
  },
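  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "*A trial-and-error sketch: the threshold pairs below are arbitrary examples around the recommended 2:1 to 3:1 ratio.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "# Sketch: try a few (low, high) threshold pairs and compare by eye,\n",
    "# keeping the upper:lower ratio between 2:1 and 3:1.\n",
    "for low, high in [(10, 30), (50, 150), (100, 200)]:\n",
    "    show_image(detect_edges(smooth_image, low, high))"
   ]
  },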
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# Region Of Interest (ROI)\n",
    "\n",
    "<br>\n",
    "<h3><center>Exclude the rest of the image by applying a mask</center><h3>\n",
    "\n",
    "<center><img src=\"images/roi.png\" alt=\"roi\"></center>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "code_folding": [
     0,
     8
    ],
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "def filter_region(image, vertices):\n",
    "    mask = np.zeros_like(image)\n",
    "    if len(mask.shape)==2:\n",
    "        cv2.fillPoly(mask, vertices, 255)\n",
    "    else:\n",
    "        cv2.fillPoly(mask, vertices, (255,)*mask.shape[2]) # in case, the input image has a channel dimension        \n",
    "    return cv2.bitwise_and(image, mask)\n",
    "  \n",
    "def select_region(image):\n",
    "    rows, cols = image.shape[:2]\n",
    "    bottom_left  = [int(cols*0.2), int(rows*0.95)]\n",
    "    top_left     = [int(cols*0.5), int(rows*0.65)]\n",
    "    bottom_right = [int(cols*0.95), int(rows*0.95)]\n",
    "    top_right    = [int(cols*0.65), int(rows*0.65)] \n",
    "    vertices = np.array([[bottom_left, top_left, top_right, bottom_right]], dtype=np.int32)\n",
    "    #image = cv2.line(image, tuple(bottom_left), tuple(top_left), (255,0,0), thickness=5)\n",
    "    #image = cv2.line(image, tuple(bottom_right), tuple(top_right), (255,0,0), thickness=5)\n",
    "    return filter_region(image, vertices)\n",
    "\n",
    "roi_image = select_region(edges_image)\n",
    "show_image(roi_image)"
   ]
  },
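  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "*The commented-out `cv2.line` calls above hint at this: a sketch (hypothetical helper, not part of the pipeline) that draws the ROI trapezoid on the original frame to sanity-check the hand-picked vertices.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "# Sketch: overlay the ROI trapezoid (same fractions as select_region)\n",
    "# on the original frame to check the hand-picked vertices.\n",
    "def draw_roi(image):\n",
    "    rows, cols = image.shape[:2]\n",
    "    vertices = np.array([[[int(cols*0.2),  int(rows*0.95)],\n",
    "                          [int(cols*0.5),  int(rows*0.65)],\n",
    "                          [int(cols*0.65), int(rows*0.65)],\n",
    "                          [int(cols*0.95), int(rows*0.95)]]], dtype=np.int32)\n",
    "    return cv2.polylines(np.copy(image), vertices, True, (255, 0, 0), 5)\n",
    "\n",
    "show_image(draw_roi(image))"
   ]
  },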
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "#  Hough Transform\n",
    "\n",
    "## Detect any shape (mathematical form)\n",
    "## Uses a voting procedure (acumulator)\n",
    "\n",
    "\n",
    "<h3><center><a href=\"https://docs.opencv.org/3.4.2/d6/d10/tutorial_py_houghlines.html\">Hough Line Transform OpenCV Tutorial</a></center></h3>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "#  Hough Transform: Line\n",
    "\n",
    "<h4>\n",
    "<ul>\n",
    "  <li>rho: Distance resolution of the accumulator in pixels.</li>\n",
    "  <li>theta: Angle resolution of the accumulator in radians.</li>\n",
    "  <li>threshold: Accumulator threshold parameter. Only those lines are returned that get enough votes (> `threshold`)</li>\n",
    "  <li>minLineLength: Minimum line length. Line segments shorter than that are rejected</li>\n",
    "  <li>maxLineGap: Maximum allowed gap between points on the same line to link them</li>\n",
    "</ul>\n",
    "</h4>\n",
    "<h3><center><a href=\"https://docs.opencv.org/3.4.2/dd/d1a/group__imgproc__feature.html#ga8618180a5948286384e3b7ca02f6feeb\">cv2.HoughLinesP()</a></center></h3>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "code_folding": [
     0
    ],
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "def draw_lines(image, lines, color=[0, 255, 0], thickness=2, make_copy=True):\n",
    "    # the lines returned by cv2.HoughLinesP has the shape (-1, 1, 4)\n",
    "    if make_copy:\n",
    "        image = np.copy(image) # don't want to modify the original\n",
    "    for line in lines:\n",
    "        for x1,y1,x2,y2 in line:\n",
    "            cv2.line(image, (x1, y1), (x2, y2), color, thickness)\n",
    "    return image\n",
    "def hough_lines(image):\n",
    "    return cv2.HoughLinesP(image, rho=1, theta=np.pi/180, threshold=5, minLineLength=5, maxLineGap=200)\n",
    "\n",
    "lines = hough_lines(roi_image)\n",
    "image_with_lines = draw_lines(image, lines)\n",
    "show_image(image_with_lines)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# Average lines\n",
    "\n",
    "<h4>\n",
    "<ul>\n",
    "  <li>Multiple lines are detected for each lane</li>\n",
    "  <li>Only partially recognized</li>\n",
    "  <li>Extrapolate line to cover the full lane</li>\n",
    "  <li>Two lanes:</li>\n",
    "  <ul>\n",
    "      <li>left with positive slope</li>\n",
    "      <li>right with negative slope</li>\n",
    "  </ul>\n",
    "</ul>\n",
    "</h4>\n",
    "\n",
    "### Note: image has the `y` reversed"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "code_folding": [],
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "def average_slope_intercept(lines):\n",
    "    left_lines    = [] # (slope, intercept)\n",
    "    left_weights  = [] # (length,)\n",
    "    right_lines   = [] # (slope, intercept)\n",
    "    right_weights = [] # (length,)\n",
    "    for line in lines:\n",
    "        for x1, y1, x2, y2 in line:\n",
    "            if x2==x1:\n",
    "                continue # ignore a vertical line\n",
    "            slope = (y2-y1)/(x2-x1)\n",
    "            intercept = y1 - slope*x1\n",
    "            length = np.sqrt((y2-y1)**2+(x2-x1)**2)\n",
    "            if slope < 0: # y is reversed in image\n",
    "                left_lines.append((slope, intercept))\n",
    "                left_weights.append((length))\n",
    "            else:\n",
    "                right_lines.append((slope, intercept))\n",
    "                right_weights.append((length))   \n",
    "    left_lane  = np.dot(left_weights,  left_lines) /np.sum(left_weights)  if len(left_weights) >0 else None\n",
    "    right_lane = np.dot(right_weights, right_lines)/np.sum(right_weights) if len(right_weights)>0 else None\n",
    "    \n",
    "    return left_lane, right_lane "
   ]
  },
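  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "*A small sketch to inspect the numbers: print the length-weighted (slope, intercept) averages for the segments detected earlier.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "# Sketch: inspect the weighted-average lane parameters computed from\n",
    "# the Hough segments found earlier.\n",
    "left_lane, right_lane = average_slope_intercept(lines)\n",
    "print('left  (slope, intercept):', left_lane)\n",
    "print('right (slope, intercept):', right_lane)"
   ]
  },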
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# Drawing the lanes \n",
    "\n",
    "<h2> <center> Need to convert to pixel points for cv2.line() </center> </h2>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "def calculate_line_points(y1, y2, line):\n",
    "    if line is None:\n",
    "        return None\n",
    "    \n",
    "    slope, intercept = line\n",
    "    x1 = int((y1 - intercept)/slope)\n",
    "    x2 = int((y2 - intercept)/slope)\n",
    "    y1 = int(y1)\n",
    "    y2 = int(y2)\n",
    "    \n",
    "    return ((x1, y1), (x2, y2))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# Drawing the lanes \n",
    "\n",
    "## Each line is a list of x1, y1, x2, y2\n",
    "\n",
    "<h4>\n",
    "<ul>\n",
    "  <li>Each line is a list of x1, y1, x2, y2</li>\n",
    "  <li>Use cv2.line() to draw the lines</li>\n",
    "  <li>Use cv2.addWeighted() to mix the images</li>\n",
    "</ul>\n",
    "</h4>\n",
    "\n",
    "<h3><center><a href=\"https://docs.opencv.org/3.4.1/d6/d6e/group__imgproc__draw.html#ga7078a9fae8c7e7d13d24dac2520ae4a2\">cv2.line()</a></center></h3>\n",
    "<h3><center><a href=\"https://docs.opencv.org/3.4.1/d2/de8/group__core__array.html#gafafb2513349db3bcff51f54ee5592a19\">cv2.addWeighted()</a></center></h3>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "code_folding": [
     0,
     10
    ],
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "def lane_lines(image, lines):\n",
    "    left_lane, right_lane = average_slope_intercept(lines)\n",
    "    y1 = image.shape[0]\n",
    "    y2 = y1*0.7         \n",
    "    left_line  = calculate_line_points(y1, y2, left_lane)\n",
    "    right_line = calculate_line_points(y1, y2, right_lane)\n",
    "    return left_line, right_line \n",
    " \n",
    "def draw_lane_lines(image, lines, color=[0, 255, 0], thickness=20):\n",
    "    line_image = np.zeros_like(image)\n",
    "    for line in lines:\n",
    "        if line is not None:\n",
    "            cv2.line(line_image, *line,  color, thickness)\n",
    "            \n",
    "    return cv2.addWeighted(image, 1.0, line_image, 0.9, 0.0)\n",
    "             \n",
    "lanes_detected = draw_lane_lines(image, lane_lines(image, lines))\n",
    "    \n",
    "show_image(lanes_detected)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# C.V. Pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "code_folding": [
     5
    ],
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "class LanesDetector:\n",
    "    def __init__(self):\n",
    "        self.left_lines  = deque(maxlen=50)\n",
    "        self.right_lines = deque(maxlen=50)\n",
    "        \n",
    "    def mean_line(self, line, lines):\n",
    "        if line is not None:\n",
    "            lines.append(line)\n",
    "\n",
    "        if len(lines)>0:\n",
    "            line = np.mean(lines, axis=0, dtype=np.int32)\n",
    "            line = tuple(map(tuple, line))\n",
    "        return line\n",
    "\n",
    "    def process(self, image):\n",
    "        white_yellow = select_white(image)\n",
    "        gray         = convert_to_gray_scale(white_yellow)\n",
    "        smooth_gray  = smoothing(gray)\n",
    "        edges        = detect_edges(smooth_gray)\n",
    "        regions      = select_region(edges)\n",
    "        lines        = hough_lines(regions)\n",
    "        left_line, right_line = lane_lines(image, lines)\n",
    "\n",
    "\n",
    "\n",
    "        left_line  = self.mean_line(left_line,  self.left_lines)\n",
    "        right_line = self.mean_line(right_line, self.right_lines)\n",
    "\n",
    "        return draw_lane_lines(image, (left_line, right_line))"
   ]
  },
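  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "*A single-frame sanity check (a sketch): the pipeline assumes RGB frames, as moviepy provides them, while `cv2.imread` returns BGR, so the test image is converted first.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "# Sketch: run the full pipeline on one frame. moviepy delivers RGB\n",
    "# frames, but cv2.imread gave us BGR, so convert before processing.\n",
    "detector = LanesDetector()\n",
    "result = detector.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))\n",
    "plt.figure(figsize=(11, 11))\n",
    "plt.imshow(result)   # result is RGB; imshow renders it directly\n",
    "plt.xticks([]); plt.yticks([])\n",
    "plt.show()"
   ]
  },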
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# Let's process the video!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "def process_video(video_input, video_output):\n",
    "    detector = LanesDetector()\n",
    "\n",
    "    clip = VideoFileClip(os.path.join('data', video_input))\n",
    "    processed = clip.fl_image(detector.process)\n",
    "    processed.write_videofile(os.path.join('data', video_output), audio=False)\n",
    "    \n",
    "%time process_video('singapore_drive.mp4', 'our_brand_new_detected_lanes.mp4')  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<center> <video controls src=\"data/our_brand_new_detected_lanes.mp4\" /> </center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "<center><img src=\"images/traffic_space.jpg\" alt=\"CV\">\n",
    "\n",
    "<h2> 92% of space is unsused <h2>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "## 1.3 million people die in road crashes each year\n",
    "## An average 3,287 deaths a day\n",
    "## Leading cause of death among young people (15-29)\n",
    "\n",
    "<br>\n",
    "<br>\n",
    "<h4><div align=\"right\">source: Association for Safe International Road Travel</div></h4>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "<center><img src=\"images/degree.png\" alt=\"degree\"></center>\n",
    "<h1><center>A Self Driving Cars Degree!</center></h1>\n",
    "<br>\n",
    "<center><img src=\"images/omg.gif\" alt=\"OMG!\"></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "<center><img src=\"images/selfdrivingcar.png\" alt=\"Self Driving Car\"></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# Computer Vision: Sign up now!  \n",
    "<br>\n",
    "<center><img src=\"images/cv.jpg\" alt=\"CV\">\n",
    "<br>\n",
    "<h2> <a href=\"https://www.upcodeacademy.com/courses/11\">upcodeacademy.com/courses/11</a> </h2></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "# Thanks!\n",
    "<center><img src=\"images/cv.jpg\" alt=\"CV\">\n",
    "<br>\n",
    "<h2> <a href=\"https://www.upcodeacademy.com/courses/11\">upcodeacademy.com/courses/11</a> </h2></center>\n",
    "<br>\n",
    "<h3> <center> marco@upcodeacademy.com </center> </h3>\n",
    "<br><br>\n",
    "<center><img src=\"images/banner.png\" alt=\"upcode banner\"></center>"
   ]
  }
 ],
 "metadata": {
  "celltoolbar": "Slideshow",
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.6"
  },
  "livereveal": {
   "autolaunch": true
  },
  "rise": {
   "theme": "night"
  },
  "toc": {
   "base_numbering": 1,
   "nav_menu": {},
   "number_sections": false,
   "sideBar": true,
   "skip_h1_title": false,
   "title_cell": "Table of Contents",
   "title_sidebar": "Contents",
   "toc_cell": false,
   "toc_position": {},
   "toc_section_display": true,
   "toc_window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}