# Udacity AI for Robotics - Week 5

## Introduction

Previously, discrete robot motion was discussed: the world was split into discrete nodes and a path was found through those nodes. This is impractical for the real world, as the paths found do not suit the actual motion of robots and are often sub-optimal.

## Smoothing

In order to produce continuous robot motion, a smoothing algorithm is applied to the discrete path found by techniques covered in Week 4, rather than tackling the complex and expensive task of finding a path in a continuous problem space.

Recall from Week 4 that the world was discretized into cells or nodes. Consider a world of $$n$$ nodes, and label each node $$x_i$$, where the range of nodes is $$x_0 \ldots x_{n-1}$$.
The general idea behind the smoothing algorithm is as follows:

1. Initialize $$y_i = x_i$$
2. Minimize $$(x_i - y_i)^2$$ and minimize $$(y_i - y_{i+1})^2$$

where $$y_i$$ is the new, smoothed version of $$x_i$$. Step 2 minimizes the difference between each smoothed node and its original discrete node, and the difference between consecutive smoothed nodes. These minimizations compete, so each is performed with a weight that trades off the smoothness of the path against its fidelity to the original path. Gradient descent on the first term gives the update:
$y_i \leftarrow y_i + \alpha (x_i - y_i)$

Gradient descent on the second term compares the points both before and after the current point:
$y_i \leftarrow y_i + \beta (y_{i+1} + y_{i-1} - 2y_i)$
This update pulls $$y_i$$ simultaneously towards both $$y_{i+1}$$ and $$y_{i-1}$$.

$$\beta$$ controls how much smoothing is done, and $$\alpha$$ controls how close the smoothed path stays to the original path.
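As a concrete illustration, here is one sweep of the two updates over a small 1-D path (the weights $$\alpha = 0.5$$ and $$\beta = 0.1$$ are assumed values for illustration; the endpoints are left fixed):

```python
# One gradient descent sweep over a 1-D path, with assumed weights
# alpha = 0.5 (data term) and beta = 0.1 (smoothness term).
path = [0.0, 0.0, 2.0, 2.0]   # original discrete points x_i
y = path[:]                   # smoothed points y_i, initialised to x_i

for i in range(1, len(y) - 1):                   # endpoints stay fixed
    y[i] += 0.5 * (path[i] - y[i])               # pull towards the original point
    y[i] += 0.1 * (y[i-1] + y[i+1] - 2 * y[i])   # pull towards the neighbours

# After a single sweep the sharp corner is already rounded:
# y is approximately [0.0, 0.2, 1.82, 2.0]
```

Repeating the sweep until the total change falls below a tolerance yields the smoothed path, which is exactly what the assignment code below does in 2-D.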

### Code

```python
#!/usr/bin/env python
# -----------
# User Instructions
#
# Define a function smooth that takes a path as its input
# (with optional parameters for weight_data, weight_smooth)
# and returns a smooth path.
#
# Smoothing should be implemented by iteratively updating
# each entry in newpath until some desired level of accuracy
# is reached. The update should be done according to the
# gradient descent equations given in the previous video.
#
# Note that you do not need to use the tolerance parameter
# shown in the video.
#
# If your function isn't submitting it is possible that the
# runtime is too long. Try sacrificing accuracy for speed.
# -----------

from math import *

# Don't modify path inside your function.
path = [[0, 0],
        [0, 1],
        [0, 2],
        [1, 2],
        [2, 2],
        [3, 2],
        [4, 2],
        [4, 3],
        [4, 4]]

# ------------------------------------------------
# smooth coordinates
#

def smooth(path, weight_data=0.5, weight_smooth=0.1, tolerance=0.000001):

    # Make a deep copy of path into newpath
    newpath = [[0 for row in range(len(path[0]))] for col in range(len(path))]
    for i in range(len(path)):
        for j in range(len(path[0])):
            newpath[i][j] = path[i][j]

    #### ENTER CODE BELOW THIS LINE ###
    # Repeat the two gradient descent updates until the total change
    # in one sweep falls below the tolerance.
    diff = tolerance
    while diff >= tolerance:
        diff = 0.
        for i in range(1, len(path) - 1):     # endpoints stay fixed
            for j in range(len(path[0])):
                aux = newpath[i][j]
                # pull towards the original data point
                newpath[i][j] += weight_data * (path[i][j] - newpath[i][j])
                # pull towards the neighbouring smoothed points
                newpath[i][j] += weight_smooth * (newpath[i-1][j] + newpath[i+1][j] - (2. * newpath[i][j]))
                diff += abs(aux - newpath[i][j])

    return newpath  # Leave this line for the grader!

# feel free to leave this and the following lines if you want to print.
newpath = smooth(path)

# thank you - EnTerr - for posting this on our discussion forum
for i in range(len(path)):
    print('[' + ', '.join('%.3f' % x for x in path[i]) + '] -> [' +
          ', '.join('%.3f' % x for x in newpath[i]) + ']')
```


## PID Control

PID control is a vast field, so only the basics will be covered.

### P Controller

A P controller aligns an agent to a predefined path, called the reference trajectory. It does so by correcting the heading of the agent proportionally to the crosstrack error (CTE), giving the P (proportional) in P controller. The crosstrack error is the error between the desired trajectory and the actual trajectory. The larger the CTE, the more the heading is changed to reduce it.

A rough way of modelling this is with the following formula, where $$\alpha$$ is the steering angle and $$\tau$$ is a gain parameter:

$\alpha = -\tau \; \text{CTE}$

However, a P controller will overshoot the reference trajectory: although the wheels point straight once the agent reaches the trajectory, the agent itself is not aligned with it.
This causes the agent to oscillate between overshooting and undershooting the reference trajectory. This state is known as being marginally stable, or colloquially just stable.
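The oscillation can be seen in a minimal sketch; the simplified bicycle model, the gain $$\tau = 0.3$$, and the wheelbase of 20 are illustrative assumptions, not the course's exact setup:

```python
from math import sin, cos, tan

# Pure P controller on a simplified bicycle model. Assumed values:
# tau = 0.3, wheelbase L = 20, unit distance per step, reference line y = 0.
L = 20.0
tau = 0.3
x, y, theta = 0.0, 1.0, 0.0
signs = []
for _ in range(200):
    cte = y                        # crosstrack error
    steering = -tau * cte          # proportional correction only
    theta += tan(steering) / L     # heading change over a unit step
    x += cos(theta)
    y += sin(theta)
    signs.append(y > 0)

# The sign of y flips again and again: the agent keeps crossing the
# reference line instead of converging (marginal stability).
crossings = sum(signs[i] != signs[i + 1] for i in range(len(signs) - 1))
```

Counting the sign changes of $$y$$ over the run shows repeated crossings of the reference line rather than convergence.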

To solve the oscillation of the P controller, there is the PD controller.

### PD Controller

The PD controller is a P controller augmented by taking into consideration the rate of change of the CTE (the differential of the CTE, the D in PD controller). When the rate of change of the CTE is high, the steering backs off; as the rate of change drops, the behaviour approaches that of a plain P controller. This can be modelled as follows:
$\alpha = -\tau_P \; \text{CTE} - \tau_D \; \frac{d}{dt} \text{CTE}$
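A minimal sketch of this update shows the derivative term damping the oscillation. The simplified bicycle model and the gains $$\tau_P = 0.2$$, $$\tau_D = 3.0$$ are illustrative assumptions, and with one time step per motion step the derivative reduces to the difference of successive CTEs:

```python
from math import sin, cos, tan

# PD controller on a simplified bicycle model. Assumed values:
# tau_p = 0.2, tau_d = 3.0, wheelbase L = 20, unit distance per step.
L = 20.0
tau_p, tau_d = 0.2, 3.0
x, y, theta = 0.0, 1.0, 0.0
last_cte = y
for _ in range(100):
    cte = y
    # difference of successive CTEs stands in for d(CTE)/dt (dt = 1)
    steering = -tau_p * cte - tau_d * (cte - last_cte)
    last_cte = cte
    theta += tan(steering) / L
    x += cos(theta)
    y += sin(theta)

# y ends very close to the reference line instead of oscillating
```

With these gains the crosstrack error decays smoothly towards zero rather than ringing about the reference line.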

### Bias

A problem with real-life agents is steering bias. The models above assume the steering mechanism is perfectly aligned, but the actual steering of an agent may differ from the commanded steering by a fixed offset; this is a bias in its steering. This bias causes the P and PD controllers to settle with a constant residual CTE that is proportional to the bias.

The solution is to correct for the error accumulated over time with the PID controller.

### PID Controller

The PID controller is a PD controller augmented by taking into consideration the integral of the CTE (the I in PID controller), i.e. the error accumulated over time. This can be modelled with the following:
$\alpha = -\tau_P \; \text{CTE} - \tau_D \; \frac{d}{dt} \text{CTE} - \tau_I \sum \text{CTE}$
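The effect of the integral term can be sketched with a simplified bicycle model; the model, the gains 0.2 / 3.0 / 0.004, and the 10-degree drift are all assumptions for illustration. With $$\tau_I = 0$$ the drift leaves a constant residual CTE, while the integral term winds up until it cancels the drift:

```python
from math import pi, sin, cos, tan

# PD vs PID under a constant steering drift, on a simplified bicycle model.
# Assumed values: tau_p = 0.2, tau_d = 3.0, wheelbase L = 20,
# 10-degree drift, unit distance per step.
def final_cte(tau_i, drift=10.0 / 180.0 * pi, steps=300):
    L = 20.0
    x, y, theta = 0.0, 1.0, 0.0
    last_cte, int_cte = y, 0.0
    for _ in range(steps):
        cte = y
        int_cte += cte                       # accumulated (integrated) CTE
        steering = -0.2 * cte - 3.0 * (cte - last_cte) - tau_i * int_cte
        last_cte = cte
        theta += tan(steering + drift) / L   # drift offsets every command
        x += cos(theta)
        y += sin(theta)
    return y

# final_cte(0.0) settles at a constant residual (roughly drift / tau_p),
# while final_cte(0.004) ends much closer to the reference line.
```

The residual of the PD run follows from the steady state: the steering must cancel the drift, so $$\tau_P \, \text{CTE} = \text{drift}$$, leaving a constant offset that only the integral term can remove.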

### Coordinate Ascent or Twiddle

Choosing the control gains ($$\tau_P, \tau_D, \tau_I$$) poses a challenge. Coordinate ascent, or ‘twiddle’, can be used to find good control gains.

Pseudo-code for the ‘twiddle’ algorithm is as follows:

```
# run(p) executes a control run with gains p and
# returns a value of quality, typically the average CTE.

p = [0, 0, 0]
dp = [1, 1, 1]
best_err = run(p)
while sum(dp) > threshold:
    for i in range(len(p)):
        p[i] += dp[i]
        err = run(p)
        if err < best_err:
            best_err = err
            dp[i] *= 1.1
        else:
            p[i] -= 2 * dp[i]
            err = run(p)
            if err < best_err:
                best_err = err
                dp[i] *= 1.1
            else:
                p[i] += dp[i]
                dp[i] *= 0.9
```


This algorithm does the following:

Initialize p to zeroed control gains, and dp to the amount to change each gain on each step. Run the controller with the initial gains and store the resulting error.
While the sum of dp is higher than some threshold (which determines how precise the optimisation will be), do the following:
For each control gain independently, add its delta and run the controller. If the result is the best so far, keep it and grow the delta. If not, subtract the delta instead and run again; if that result is the best so far, keep it and grow the delta. Otherwise, restore the gain, shrink the delta, and move on.

This finds a good set of control gains (a local optimum) within the tolerance set by the threshold.
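The pseudo-code can be made concrete by running twiddle against a toy objective; the quadratic `err` below and its target gains are stand-in assumptions for the average-CTE of a real control run:

```python
# Twiddle (coordinate ascent) on a toy objective. The quadratic err and
# the target gains (0.5, -1.0, 2.0) are assumptions standing in for the
# average CTE returned by a real control run.
def err(p, target=(0.5, -1.0, 2.0)):
    return sum((pi - ti) ** 2 for pi, ti in zip(p, target))

def twiddle(threshold=1e-6):
    p = [0.0, 0.0, 0.0]      # gains being optimised
    dp = [1.0, 1.0, 1.0]     # per-gain step sizes
    best_err = err(p)
    while sum(dp) > threshold:
        for i in range(len(p)):
            p[i] += dp[i]
            e = err(p)
            if e < best_err:
                best_err = e
                dp[i] *= 1.1         # success: grow the step
            else:
                p[i] -= 2 * dp[i]    # try the other direction
                e = err(p)
                if e < best_err:
                    best_err = e
                    dp[i] *= 1.1
                else:
                    p[i] += dp[i]    # restore the gain
                    dp[i] *= 0.9     # failure both ways: shrink the step
    return p

# twiddle() returns gains close to the target (0.5, -1.0, 2.0)
```

On this separable quadratic the algorithm converges to the target in well under a second; on a real robot each `err` evaluation is a full simulated control run, which is why keeping the runs short matters.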

### Code

```python
# -----------
# User Instructions
#
# Implement a PID controller by running 100 iterations
# of robot motion. The steering angle should be set
# by the tau parameters so that:
#
# steering = -tau_p * CTE - tau_d * diff_CTE - tau_i * int_CTE
#
# where the integrated crosstrack error (int_CTE) is
# the sum of all the previous crosstrack errors.
# This term works to cancel out steering drift.
#
# Your code should print a list that looks just like
# the list shown in the video.
#
# Only modify code at the bottom!
# ------------

from math import *
import random

# ------------------------------------------------
#
# this is the robot class
#

class robot:

    # --------
    # init:
    #    creates robot and initializes location/orientation to 0, 0, 0
    #

    def __init__(self, length=20.0):
        self.x = 0.0
        self.y = 0.0
        self.orientation = 0.0
        self.length = length
        self.steering_noise = 0.0
        self.distance_noise = 0.0
        self.steering_drift = 0.0

    # --------
    # set:
    #    sets a robot coordinate
    #

    def set(self, new_x, new_y, new_orientation):
        self.x = float(new_x)
        self.y = float(new_y)
        self.orientation = float(new_orientation) % (2.0 * pi)

    # --------
    # set_noise:
    #    sets the noise parameters
    #

    def set_noise(self, new_s_noise, new_d_noise):
        # makes it possible to change the noise parameters
        # this is often useful in particle filters
        self.steering_noise = float(new_s_noise)
        self.distance_noise = float(new_d_noise)

    # --------
    # set_steering_drift:
    #    sets the systematical steering drift parameter
    #

    def set_steering_drift(self, drift):
        self.steering_drift = drift

    # --------
    # move:
    #    steering = front wheel steering angle, limited by max_steering_angle
    #    distance = total distance driven, must be non-negative

    def move(self, steering, distance,
             tolerance=0.001, max_steering_angle=pi / 4.0):

        if steering > max_steering_angle:
            steering = max_steering_angle
        if steering < -max_steering_angle:
            steering = -max_steering_angle
        if distance < 0.0:
            distance = 0.0

        # make a new copy
        res = robot()
        res.length         = self.length
        res.steering_noise = self.steering_noise
        res.distance_noise = self.distance_noise
        res.steering_drift = self.steering_drift

        # apply noise
        steering2 = random.gauss(steering, self.steering_noise)
        distance2 = random.gauss(distance, self.distance_noise)

        # apply steering drift
        steering2 += self.steering_drift

        # Execute motion
        turn = tan(steering2) * distance2 / res.length

        if abs(turn) < tolerance:
            # approximate by straight line motion
            res.x = self.x + (distance2 * cos(self.orientation))
            res.y = self.y + (distance2 * sin(self.orientation))
            res.orientation = (self.orientation + turn) % (2.0 * pi)
        else:
            # approximate bicycle model for motion
            radius = distance2 / turn
            cx = self.x - (sin(self.orientation) * radius)
            cy = self.y + (cos(self.orientation) * radius)
            res.orientation = (self.orientation + turn) % (2.0 * pi)
            res.x = cx + (sin(res.orientation) * radius)
            res.y = cy - (cos(res.orientation) * radius)

        return res

    def __repr__(self):
        return '[x=%.5f y=%.5f orient=%.5f]' % (self.x, self.y, self.orientation)

############## ADD / MODIFY CODE BELOW ####################

# ------------------------------------------------------------------------
#
# run - does a single control run.

def run(param1, param2, param3):
    myrobot = robot()
    myrobot.set(0.0, 1.0, 0.0)
    speed = 1.0  # motion distance is equal to speed (we assume time = 1)
    N = 100
    # 10 degree bias; the move function adds it in,
    # you do not need to add it below!
    myrobot.set_steering_drift(10.0 / 180.0 * pi)

    last_cte = myrobot.y
    sum_cte = 0
    for _ in range(N):
        cte = myrobot.y
        steering = -param1 * cte - param2 * (cte - last_cte) - param3 * sum_cte
        myrobot = myrobot.move(steering, speed)
        last_cte = cte
        sum_cte += cte
        print(myrobot, steering)

# Call the function with parameters (0.2, 3.0, 0.004)
run(0.2, 3.0, 0.004)
```