Readme
This documentation contains weekly assignments for the MPC-MAP course, as well as the semester project assignment and a brief description of the simulation tool.
Please note that this documentation is continuously updated. The final version of each assignment will be available in the corresponding week. The same applies to the semester project; its assignment will be introduced by a lecturer during the semester.
Week 2 - Uncertainty
The goal of this assignment is to become familiar with the simulator and explore uncertainties in the sensors and motion.
Task 1 – The simulator
Download the repository containing the simulator from GitHub (https://github.com/Robotics-BUT/MPC-MAP-Student) and become familiar with it (see the Simulator section).
- Explore the `private_vars`, `read_only_vars`, and `public_vars` data structures.
- Become familiar with the sequence of operations in the infinite simulation loop in `main.m` and `algorithms/student_workspace.m`.
- Load different maps via `algorithms/setup.m` and set different start positions.
- Try various motion commands in `algorithms/motion_control/plan_motion.m` and observe the robot's behaviour.
No output is required in this step.
Task 2 – Sensor uncertainty
The robot is equipped with an 8-way LiDAR and a GNSS receiver. Determine the standard deviation (std) `sigma` for the data from both sensors by placing the robot in a static position (zero velocities) in suitable maps and collecting data for at least 100 simulation periods. Discuss whether the std is consistent across individual LiDAR channels and both GNSS axes. Plot histograms of the measurements.
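A minimal evaluation sketch is shown below. It assumes the static measurements have already been logged into matrices `lidar_log` (one row per simulation period, eight columns) and `gnss_log` (two columns); these names and the logging itself are illustrative, not part of the simulator.

```matlab
% Assumed: lidar_log (>=100 x 8) and gnss_log (>=100 x 2) hold the
% measurements collected while the robot was standing still.
sigma_lidar = std(lidar_log);          % 1x8, one std per LiDAR channel
sigma_gnss  = std(gnss_log);           % 1x2, std of the GNSS X and Y axes

figure;
subplot(1, 2, 1);
histogram(lidar_log(:, 1));            % histogram of the 1st LiDAR channel
title('LiDAR channel 1'); xlabel('range');
subplot(1, 2, 2);
histogram(gnss_log(:, 1));             % histogram of the GNSS X axis
title('GNSS X'); xlabel('position');
```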
Task 3 – Covariance matrix
Use the measurements from the previous step and MATLAB's built-in `cov` function to assemble the covariance matrix for both sensors. Verify that the resultant matrix is of size 8×8 for the LiDAR and 2×2 for the GNSS. Ensure that the values on the main diagonal are equal to `sigma^2`, i.e., variance = std^2.
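A possible check, again assuming the logged matrices `lidar_log` and `gnss_log` from Task 2:

```matlab
C_lidar = cov(lidar_log);              % should be 8x8
C_gnss  = cov(gnss_log);               % should be 2x2

disp(size(C_lidar));
disp(size(C_gnss));

% The main diagonal should match the squared standard deviations from Task 2.
disp([diag(C_lidar)'; std(lidar_log).^2]);
disp([diag(C_gnss)';  std(gnss_log).^2]);
```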
Task 4 – Normal distribution
Create a function `norm_pdf` to assemble the probability density function (pdf) of the normal distribution. The function should accept three arguments: `x` (values at which to evaluate the pdf), `mu` (mean), and `sigma` (standard deviation). Utilize this function along with the `sigma` values from Task 2 (e.g., for the 1st LiDAR channel and the GNSS X axis) to generate two pdfs illustrating the noise characteristics of the robot's sensors, and plot them in a single image (use `mu = 0` in both cases).
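One possible implementation and usage is sketched below. The evaluation part assumes the standard deviations from Task 2 are stored in variables named `sigma_lidar` and `sigma_gnss` (illustrative names); the function itself can be saved as `norm_pdf.m` in the `algorithms` directory.

```matlab
function p = norm_pdf(x, mu, sigma)
% NORM_PDF  Probability density function of the normal distribution.
%   p = NORM_PDF(x, mu, sigma) evaluates the pdf at the points in x
%   for mean mu and standard deviation sigma.
    p = exp(-(x - mu).^2 ./ (2 * sigma.^2)) ./ (sigma .* sqrt(2 * pi));
end
```

The two pdfs can then be plotted in a single figure, e.g.:

```matlab
x = linspace(-0.5, 0.5, 1000);             % adjust the range to the measured sigmas
figure; hold on;
plot(x, norm_pdf(x, 0, sigma_lidar(1)));   % 1st LiDAR channel, mu = 0
plot(x, norm_pdf(x, 0, sigma_gnss(1)));    % GNSS X axis, mu = 0
legend('LiDAR channel 1', 'GNSS X'); xlabel('measurement noise'); ylabel('pdf');
```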
Task 5 – Motion uncertainty
Uncertainty exists not only in the measurements (sensor data) but also in the motion. Load the `indoor_1` map and attempt to navigate the robot to the goal location without using sensors. To achieve this, apply an appropriate sequence of motion commands inside the `plan_motion.m` function. Save a screenshot of a successful run and discuss the potential sources of uncertainty in the robot's motion.
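One way to produce such a sequence is to key the commands to the iteration counter, as in the sketch below. The exact interface of `plan_motion.m` must be taken from the provided function; here it is only assumed that the counter is available as `read_only_vars.counter` and that the result is returned in `motion_vector` (`[v_right, v_left]`), with made-up speeds and switching times.

```matlab
% Illustrative open-loop sequence; speeds and thresholds are placeholders.
if read_only_vars.counter < 50
    motion_vector = [1.0, 1.0];    % drive straight
elseif read_only_vars.counter < 65
    motion_vector = [0.5, -0.5];   % turn in place
else
    motion_vector = [1.0, 1.0];    % continue straight towards the goal
end
```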
Submission
To implement the tasks, use only the `algorithms` directory; do not modify the rest of the simulator. The solution must run without errors in a fresh instance of the simulator and must generate the graphical outputs included in the report.
Create a single A4 report based on the provided template that briefly describes your solution, with a few sentences for each task and an image where applicable.
Send the report and a `zip` archive containing the `algorithms` directory to the lecturer's e-mail by Wednesday at 23:59 next week.
For those using Git for version control, you can send a link to your public GitHub repository instead of the `zip` file. The repository must contain the simulator with the `algorithms` directory holding your solution. Please tag the final version with the `week_2` tag to ensure easy identification.
Week 3 - Motion control
To be added.
Week 4 - Particle Filter
To be added.
Week 5 - Kalman Filter and EKF
To be added.
Week 6 - Path Planning
To be added.
Semester Project
To be added.
MATLAB Robot Simulator
The simulator is a lightweight, MATLAB-based tool for testing key algorithms used for autonomous navigation in mobile robotics. It integrates a differential-drive mobile robot model equipped with two different sensors (LiDAR and GNSS) and allows it to be deployed within custom 2D maps. The main goal is to navigate the robot from the start to the goal position; for this reason, several algorithms must be implemented:
- Localization: two algorithms are needed, one for outdoor and one for indoor areas. The pose may be estimated via an Extended Kalman Filter and GNSS data in outdoor areas; indoors, an algorithm based on a Particle Filter and a known map is more suitable, since the indoor area is a GNSS-denied zone.
- Path planning: an algorithm to find an optimal, obstacle-free path from the start to the goal location (e.g., the A* or Dijkstra's algorithm).
- Motion control: a control strategy to follow the computed path using the current pose estimate. This results in the control commands for the individual wheels.
The simulator has been tested in MATLAB R2023b; it may not work correctly in other versions.
Variables
The simulator uses numerous variables to provide its functionality; however, not all of them may be used/read when solving the tasks (the robot's true position, for example). The variables are divided into three groups (structures):
- Private variables (`private_vars`): these variables are used in the main script only and are not accessible in the modifiable student functions.
- Read-only variables (`read_only_vars`): these are accessible to your code, but not returned to the main script.
- Public variables (`public_vars`): feel free to use and modify these variables and to add new items to the structure. The majority of student functions return the structure to the main script, so you can use it to share variables between the functions.
Apart from these variable structures, other variables may occur in your workspace; there is no limit to their use.
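As a small, hypothetical example of such sharing (the field name below is made up, not predefined by the simulator):

```matlab
% In one student function: store an intermediate result in the structure.
public_vars.last_pose_estimate = [0.5; 1.2; pi/4];   % hypothetical field

% In another student function that receives public_vars later in the loop:
pose = public_vars.last_pose_estimate;
```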
Simulation Loop
The simulator workspace comprises the main script, `main.m`, which contains the main simulator logic/loop and is used to run the simulation (`F5` key). After the initialization part (you are expected to modify the `setup.m` file called at the beginning), there is a `while true` infinite simulation loop with the following components:
1. Check goal: check whether the goal has been reached.
2. Check collision: check that the robot has not hit a wall.
3. Check presence: check that the robot has not left the arena.
4. Check particles: check the particle limit.
5. LiDAR measurement: read the LiDAR data and save them into the `read_only_vars` structure.
6. GNSS measurement: read the GNSS data and save them into the `read_only_vars` structure.
7. MoCap measurement: read the reference pose from a motion capture system and save it into the `read_only_vars` structure.
8. Initialization procedure: by default, called in the first iteration only; used to initialize the filters and perform other tasks done only once.
9. Update particle filter: modifies the set of particles used in your particle filter algorithm.
10. Update Kalman filter: modifies the mean and variance used in your Kalman filter algorithm.
11. Estimate pose: use the filters' outputs to acquire the pose estimate.
12. Path planning: returns the result of your path planning algorithm.
13. Plan motion: returns the result of your motion control algorithm. Save the result into the `motion_vector` variable (`[v_right, v_left]`).
14. Move robot: physically moves the robot according to the `motion_vector` control variable.
15. GUI rendering: renders the simulator state in a Figure window.
16. Increment counter: increments the read-only variable `counter` to record the number of finished iterations.
Steps 8 to 13 are located in a separate function, `algorithms/student_workspace.m`.
Warning! Step 7 is going to be skipped during the final project evaluation. Your solution must not rely on MoCap data!
You should be able to complete all the assignments without modifying the `main.m` file.
Custom Functions
You are welcome to add as many custom functions to the `algorithms` folder as you like; however, try to follow the proposed folder structure (e.g., put the Kalman filter-related functions in the `kalman_filter` folder). You may also arbitrarily modify the content (but not the header) of the student workspace function (steps 8 to 13 of the simulation loop) and the other functions called by it.
Maps and testing
The `maps` directory contains several maps in a text file format, which are parsed in real time when the simulation is started. Use reverse engineering to understand the syntax, and create your own maps to test your algorithms thoroughly. In general, the syntax includes the definition of the goal position, map dimensions, wall positions, and GNSS-denied polygons. Do not forget to test various start poses as well (including the start angle!); this is adjustable via the `start_position` variable (`setup.m`). For the project evaluation, a comprehensive map comprising both indoor and outdoor areas and GNSS-denied zones will be employed.
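For instance, a different start pose could be set in `setup.m` roughly as follows (the `[x, y, angle]` ordering is an assumption; check the existing `setup.m` for the actual convention):

```matlab
% Hypothetical start pose: position in map coordinates plus the start angle.
% The element order is assumed; verify it against the provided setup.m.
start_position = [1.5, 2.0, pi/2];
```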
GUI
