
10.06.2023

Pedestrian Detection – Aggregated Channel Features Intro

Overview

1. Aggregated Channel Features – Descriptors


2. Weak Classifiers – Regression Trees
3. Boosted Trees (Modified Discrete AdaBoost) – Learning Algorithm

Tasks

• Sliding Window Object Detection (Pedestrian Detection)
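A minimal sketch of the sliding-window detection loop, assuming the input image has already been converted into aggregated feature channels and that a score_window classifier (such as the boosted trees described later) is available; the function names, window size, stride and threshold are illustrative choices, not taken from the slides:

```python
import numpy as np

def sliding_window_detect(channels, score_window, win=32, stride=4, thresh=0.0):
    """Slide a win-by-win window over the aggregated channel image and keep
    every position whose classifier score exceeds the threshold."""
    H, W = channels.shape[:2]
    detections = []
    for y in range(0, H - win + 1, stride):
        for x in range(0, W - win + 1, stride):
            s = score_window(channels[y:y + win, x:x + win])
            if s > thresh:
                detections.append((x, y, win, win, s))  # (x, y, w, h, score)
    return detections
```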


Aggregated Channel Features

• LUV – is computed by converting the input RGB image into the LUV color space
• GM (Gradient Magnitude) – is computed by convolution with a discrete derivative mask in both the horizontal and vertical directions, and captures undirected edge strength
• GH (Gradient Histograms) – are computed by assigning a weighted vote based on the values found in the gradient computation

[Figure: 32-by-32 channel images]
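A minimal sketch of how these three channel types could be computed, assuming NumPy and OpenCV are available; the [-1, 0, 1] derivative mask, the 6 orientation bins, the 4-pixel aggregation block and the function name acf_channels are illustrative choices, not taken from the slides:

```python
import numpy as np
import cv2  # assumed available for the RGB -> LUV conversion

def acf_channels(rgb, n_bins=6, shrink=4):
    """Compute LUV + gradient magnitude + gradient histogram channels from a
    uint8 RGB image, aggregated (averaged) over shrink-by-shrink blocks."""
    # LUV: convert the input RGB image into the LUV color space
    luv = cv2.cvtColor(rgb, cv2.COLOR_RGB2Luv).astype(np.float32) / 255.0

    # GM: convolve with a discrete derivative mask in both directions
    gray = rgb.astype(np.float32).mean(axis=2) / 255.0
    gx = np.zeros_like(gray); gy = np.zeros_like(gray)
    gx[:, 1:-1] = (gray[:, 2:] - gray[:, :-2]) / 2.0   # horizontal derivative
    gy[1:-1, :] = (gray[2:, :] - gray[:-2, :]) / 2.0   # vertical derivative
    mag = np.sqrt(gx ** 2 + gy ** 2)                   # undirected edge strength

    # GH: each pixel casts a vote, weighted by its gradient magnitude,
    # into one of n_bins undirected orientation bins
    ori = np.mod(np.arctan2(gy, gx), np.pi)            # orientation in [0, pi)
    bins = np.minimum((ori / np.pi * n_bins).astype(np.int32), n_bins - 1)
    hist = np.zeros(gray.shape + (n_bins,), dtype=np.float32)
    rows, cols = np.indices(gray.shape)
    hist[rows, cols, bins] = mag

    # Stack 3 + 1 + n_bins channels and aggregate over shrink x shrink blocks
    channels = np.dstack([luv, mag[..., None], hist])
    h = (channels.shape[0] // shrink) * shrink
    w = (channels.shape[1] // shrink) * shrink
    channels = channels[:h, :w]
    agg = channels.reshape(h // shrink, shrink, w // shrink, shrink, -1).mean(axis=(1, 3))
    return agg  # (h/shrink, w/shrink, 10) for the default settings
```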

Boosted Trees – Learning Algorithm


ACF Training

Initialization – Initialize the positive and negative training samples and consider k = 4 levels of cascade length, L = [32, 128, 512, 2048].
Cascade Training – Train a cascade according to the boosting scheme "Cascade Training", considering the cascade length L(k) (loop over L = [32, 128, 512, 2048]; within each stage, loop over the L(k) weak learners).
Bootstrapping – Evaluate the cascade over a larger set of negative samples and improve the negative dataset with hard negatives.
Finish – Output the final cascade.

Cascade Training

Given the set of positive $(\boldsymbol{x}_i, +1)$ and negative $(\boldsymbol{x}_i, -1)$ sample sets, $\boldsymbol{x}_i \in \mathbb{R}^n$.

Initialize the weights – Initialize the positive and negative weights:
$w(i) = \frac{1}{2N_p}, \quad i = 1, \dots, N_p$
$w(i) = \frac{1}{2N_n}, \quad i = N_p + 1, \dots, N_p + N_n$

Decision Trees Training – Train regression trees according to the algorithm "Decision Trees Training". The learning procedure returns a tree classifier $h_t(\boldsymbol{x})$ and a training error $\varepsilon_t$.

Calculate $\alpha_t = \max\!\left(-3, \min\!\left(3, \log\frac{1 - \varepsilon_t}{\varepsilon_t}\right)\right)$.

For all training samples $(\boldsymbol{x}_i, y_i)$ calculate the score $S_i = \sum_t \alpha_t h_t(\boldsymbol{x}_i)$.

Update the weights $w(i) = e^{-y_i S_i}$ and normalize all $w(i)$.

Finish – Output the final classifier as $H(\boldsymbol{x}) = \operatorname{sign}\!\left(\sum_t \alpha_t h_t(\boldsymbol{x})\right)$.
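A minimal sketch of the boosting scheme above, assuming a hypothetical train_tree(X, y, w) helper that plays the role of "Decision Trees Training" and returns a prediction function together with its weighted training error; the feature-matrix layout and the small epsilon guard inside the logarithm are illustrative choices:

```python
import numpy as np

def cascade_training(X, y, cascade_length, train_tree):
    """Modified discrete AdaBoost: X is an (N, n) feature matrix, y in {+1, -1},
    train_tree(X, y, w) -> (predict_fn, eps) is the weak-learner trainer."""
    pos, neg = y == 1, y == -1
    Np, Nn = pos.sum(), neg.sum()

    # Initialize positive and negative weights: 1/(2*Np) and 1/(2*Nn)
    w = np.empty(len(y))
    w[pos] = 1.0 / (2 * Np)
    w[neg] = 1.0 / (2 * Nn)

    trees, alphas = [], []
    S = np.zeros(len(y))                      # accumulated scores S_i
    for _ in range(cascade_length):           # loop over the L(k) weak learners
        h, eps = train_tree(X, y, w)          # "Decision Trees Training"
        alpha = max(-3.0, min(3.0, np.log((1 - eps) / max(eps, 1e-12))))
        trees.append(h); alphas.append(alpha)

        # Score every training sample with the current strong classifier
        S += alpha * h(X)

        # Re-weight: w(i) = exp(-y_i * S_i), then normalize
        w = np.exp(-y * S)
        w /= w.sum()

    # Final classifier H(x) = sign(sum_t alpha_t h_t(x))
    def H(Xq):
        return np.sign(sum(a * h(Xq) for a, h in zip(alphas, trees)))
    return H
```

In the full ACF pipeline this loop would sit inside the outer loop over L = [32, 128, 512, 2048] and be followed by the bootstrapping step that mines hard negatives.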


Boosted Trees – Learning Algorithm


Decision Trees Training

Given the sample set $S(k,m) \subset S(0,1)$ containing $(\boldsymbol{x}_i, y_i)$ with their associated sample weights $w(i)$, $\boldsymbol{x}_i \in \mathbb{R}^n$.

for k = 0 to NrLevels+1
  for m = 1 to 2^k
    Create a tree node N(k,m) with the set S(k,m) of samples and their associated weights.
    if k = NrLevels+1, mark it as a leaf:
      Calculate the leaf prior error $\varepsilon_l = \frac{\sum_{i \in l,\, y_i = -1} w(i)}{\sum_{i \in l} w(i)}$.
      Calculate the leaf score $S_l = \log\frac{W_l^p}{W_l^n}$, the ratio of the positive to the negative weight reaching the leaf; the leaf label is $L_l = \operatorname{sign}(S_l)$.
      Calculate the leaf weight as the sum of the sample weights that reach the leaf, $W_l = \sum_{i \in l} w(i)$.
      Calculate the leaf error as $\varepsilon_l = \min(\varepsilon_l, 1 - \varepsilon_l)$.
    else
      "Train a Stump" with the sample subset S(k,m) and their associated weights. The samples with $f_b(k,m) \le T(k,m)$ are passed to node N(k+1, 2m-1); the remaining samples are passed to node N(k+1, 2m).
    end
Output the tree structure and the training error $\varepsilon$.

To evaluate the tree on a sample $\boldsymbol{x}$, start at node N(0,1) and, for k = 0 to NrLevels+1: if N(k,m) is a leaf, output the leaf label $L_l$ and break; otherwise, if $f_b(k,m) \le T(k,m)$ set m = 2m-1, else set m = 2m.

Stump Training

Consider the feature vector $\boldsymbol{F} \in \mathbb{R}^n$, $\boldsymbol{F} = (f_1, \dots, f_j, \dots, f_n)$.

Given the weighted probability distributions (PDF) of the elements $f_j$,
$wPDF_p(k, j) = \sum_{i:\, f_j(\boldsymbol{x}_i) = k} w(i)$, k = 0..255, over the positive samples,
$wPDF_n(k, j) = \sum_{i:\, f_j(\boldsymbol{x}_i) = k} w(i)$ over the negative samples,
the cumulative weighted distributions (CDF) can be calculated as:
$CDF_{p,n}(k, j) = \sum_{l \le k} wPDF_{p,n}(l, j)$

The composite feature that best separates the positive and negative data is calculated as:
$b = \arg\max_j \max_k \left| CDF_p(k, j) - CDF_n(k, j) \right|$

The decision rule of the stump is applied on feature b with the threshold
$T = \arg\max_k \left| CDF_n(k, b) - CDF_p(k, b) \right|$
If $f_b \le T$, the label of the left leaf is $\operatorname{sign}(CDF_p(T, b) - CDF_n(T, b))$; these samples will be treated as inputs for the node N(k+1, 2m-1).
Else, the label of the right leaf is $\operatorname{sign}(CDF_n(T, b) - CDF_p(T, b))$; these samples will be treated as inputs for the node N(k+1, 2m).
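A minimal sketch of the stump training step, assuming the feature values have already been quantized to integers in 0..255 (matching the k = 0..255 range above); the function name, the use of np.bincount and the handling of ties are illustrative choices:

```python
import numpy as np

def train_stump(F, y, w):
    """F: (N, n) feature matrix quantized to integers 0..255,
    y: labels in {+1, -1}, w: sample weights.
    Returns the selected feature index b, the threshold T and the leaf labels."""
    N, n = F.shape
    pos, neg = y == 1, y == -1

    # Weighted PDFs wPDFp(k, j), wPDFn(k, j) of each feature over each class
    wpdf_p = np.zeros((256, n)); wpdf_n = np.zeros((256, n))
    for j in range(n):
        wpdf_p[:, j] = np.bincount(F[pos, j], weights=w[pos], minlength=256)
        wpdf_n[:, j] = np.bincount(F[neg, j], weights=w[neg], minlength=256)

    # Cumulative weighted distributions CDFp(k, j), CDFn(k, j)
    cdf_p = np.cumsum(wpdf_p, axis=0)
    cdf_n = np.cumsum(wpdf_n, axis=0)

    # Best separating feature: b = argmax_j max_k |CDFp(k, j) - CDFn(k, j)|
    sep = np.abs(cdf_p - cdf_n)
    b = int(np.argmax(sep.max(axis=0)))

    # Threshold on feature b: T = argmax_k |CDFn(k, b) - CDFp(k, b)|
    T = int(np.argmax(sep[:, b]))

    # Leaf labels: samples with f_b <= T fall into the left leaf, the rest right
    left_label = float(np.sign(cdf_p[T, b] - cdf_n[T, b]))
    right_label = float(np.sign(cdf_n[T, b] - cdf_p[T, b]))
    return b, T, left_label, right_label
```

Inside the tree-building loop, the samples with F[:, b] <= T would then be routed to node N(k+1, 2m-1) and the remaining samples to node N(k+1, 2m).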
