Commit 4e92b96a authored by cn1n18's avatar cn1n18

Update root.tex

parent f092d112
@@ -75,22 +75,9 @@ reconceptualisation of swarms as scalable groups of robots acting
jointly over distances up to 1~km. Such robots need to be low cost
and high in autonomy.
Safely navigating mobile robots in off-road environments, such as forests, requires real-time and accurate terrain traversability analysis. To enable safe autonomous operation of a swarm of robots during exploration, the ability to accurately estimate terrain traversability is critical. By analysing terrain geometry features such as depth maps or point clouds, and appearance characteristics such as colour or texture, together with the robot's locomotion and mechanical structure, researchers can design terrain analysis functions that combine these factors \cite{balta2013terrain}. To support this analysis for off-road path planning we are developing a vision system capable of running on small on-board computers. To also keep the sensor cost low, we are interested in monocular depth estimation \cite{bhoi2019monocular} to predict local depth from single images or from sequences of images taken by a single moving camera. Aside from optical flow and geometric techniques, machine learning has been applied to achieve this. A number of authors have trained depth estimation models using deep neural network architectures \cite{godard2017unsupervised,xu2018structured,eigen2014depth,laina2016deeper,alhashim2018high}.
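To make the idea of terrain analysis from depth concrete, the following is a minimal sketch, not the system described in this paper: it approximates local terrain slope from a predicted depth map and applies a purely slope-based mobility threshold. The pinhole focal length `fx`, the slope threshold, and the function name are illustrative assumptions.

```python
import numpy as np

def traversability_from_depth(depth, fx=525.0, max_slope_deg=30.0):
    """Toy terrain-analysis sketch (illustrative, not the paper's method):
    mark pixels whose local depth gradient, used as a proxy for terrain
    slope, exceeds a robot-specific threshold as non-traversable."""
    # Depth change per pixel along each image axis.
    dzdy, dzdx = np.gradient(depth)
    # Metric size of one pixel at each depth under a pinhole model.
    pixel_size = depth / fx
    slope = np.degrees(np.arctan2(np.hypot(dzdx, dzdy), pixel_size))
    return slope <= max_slope_deg  # True where terrain is traversable

# Flat ground at 2 m depth: zero gradient, so every pixel is traversable.
flat = np.full((8, 8), 2.0)
print(traversability_from_depth(flat).all())  # True
```

A real system would additionally fuse appearance cues and the robot's mechanical limits, as noted above; this sketch isolates only the geometric term.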
Most existing outdoor depth map datasets focus on autonomous driving. The KITTI dataset \cite{geiger2013vision} records urban street scenes. The Freiburg Forest dataset \cite{valada16iser} mainly records whole-forest views; although it captures distant views of the forest, it lacks close-range data such as individual trees, branches, leaves, and bushes. Since the dataset has to be manually labelled for image segmentation, it contains only 366 images, which is far from enough for deep neural network training. The Make3D dataset \cite{saxena2008make3d,saxena2007learning} records many outdoor scenes and some close-up depth data, but it mainly concentrates on city buildings. We have reviewed most available RGB-D datasets and found that the majority were recorded indoors \cite{firman2016rgbd}. Although current indoor depth map datasets \cite{Silberman:ECCV12} record close-range depth data, they are not captured in natural outdoor environments. Thus, neither the current indoor nor the current outdoor datasets are suitable for our research.
@@ -172,7 +159,7 @@ The data for our forest environment dataset was collected at the Southampton Com
\begin{figure}
\centering
\includegraphics[width=3in]{path.png}
\caption{\textbf{GPS satellite map.} This figure shows our recording zone on the satellite map of Southampton Common. The orange line represents the GPS trajectory of one of the 5 runs. The white scale bar in the lower-left corner denotes 30 metres. GPS information for all of the runs is included in the metadata of the CSV file set.}
\label{fig:path}
\end{figure}
@@ -233,8 +220,8 @@ Our analysis suggest a good quality of depth data of the forest environment, wit
\begin{figure}
\centering
\includegraphics[width=3.2in]{VLF.pdf}
\caption{\textbf{Velocity, Luminosity and Fill rate.} This figure shows the correlations between velocity and fill rate and between luminosity and fill rate, respectively. Data points are randomly selected from all five runs.}
\label{fig:vlf}
\end{figure}
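The fill rate plotted in Fig.~\ref{fig:vlf} can be sketched as the fraction of pixels in a depth frame that hold a valid reading. The zero-as-invalid convention below is an assumption common to stereo and active depth sensors, not a detail stated in this paper; check the sensor's documentation for its actual dropout marker.

```python
import numpy as np

def fill_rate(depth, invalid=0.0):
    """Fraction of depth pixels carrying a valid reading.
    `invalid` is the sensor's dropout marker (assumed zero here)."""
    return np.count_nonzero(depth != invalid) / depth.size

# A 2x3 frame with two dropout pixels out of six.
depth = np.array([[0.0, 1.2, 2.1],
                  [3.4, 0.0, 1.8]])
print(fill_rate(depth))  # 4/6 ≈ 0.667
```

Computing this per frame alongside mean image luminosity and platform velocity yields the paired samples whose correlations the figure visualises (e.g. via `np.corrcoef`).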