Commit 8dfb0c78 authored by cn1n18

Update root.tex

parent 26c5e72a
@@ -252,7 +252,7 @@ Our analysis suggest a good quality of depth data of the forest environment, wit
Our results indicate that the depth estimated with our mobile sensor platform was close to the ground-truth measurements (see Figure~\ref{fig:depth-error}). Across all sampled points P1 to P9, the mean error was less than $4\%$. The highest error, $12\%$, occurred at point P8, which was positioned furthest from the camera.
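As a minimal sketch of how these percentages can be computed, assuming the reported figures are per-point relative errors against the ground-truth depths (the array values below are illustrative placeholders, not the measurements behind Figure~\ref{fig:depth-error}):

```python
import numpy as np

# Illustrative placeholder depths in metres for points P1..P9;
# these are NOT the measured values behind the figure.
estimated    = np.array([2.02, 3.05, 4.15, 5.10, 6.10, 7.20, 8.10, 10.00, 10.20])
ground_truth = np.array([2.00, 3.10, 4.10, 5.00, 6.20, 7.10, 8.00,  8.90, 10.50])

# Per-point error as a percentage of the ground-truth depth.
rel_err_pct = 100.0 * np.abs(estimated - ground_truth) / ground_truth

print(f"mean error: {rel_err_pct.mean():.1f}%")   # paper reports < 4%
print(f"max error:  {rel_err_pct.max():.1f}% at P{rel_err_pct.argmax() + 1}")
```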
-\noindent\textbf{Images in upward view:} There are a few data that looks upward so that the ground cannot be included. We aim to record near obstacles against the far background for the depth information. If camera looks downward too much, the colour gradients shown in the depth map are the lower part of the image, which is near, marked as red, and the upper part of the image, which is far, is marked as blue. In such a case, the depth map cannot be capable of recording obstacles. Thus, we have to adjust the camera during recording so that it can acquire more obstacle depth data. By fusing the IMU data (gyro data and accelerometer data) into orientation, we can estimate the orientation pitch angle of frames. Frames without ground in view constitute about $1\%$ to $20\%$ of each video. These frames can be filtered out by discarding frames with pitch value $>$ 4 degrees.
+\noindent\textbf{Images in upward view:} A small portion of the recordings looks upward, so the ground is not included in the frame. Our aim is to record near obstacles against the far background to obtain useful depth information. If the camera looks downward too much, the colour gradient in the depth map simply follows image height: the lower part of the image is near and marked red, while the upper part is far and marked blue. In such cases, the depth map cannot capture obstacles. We therefore adjusted the camera during recording so that it acquires more obstacle depth data. By fusing the IMU data (gyroscope and accelerometer readings) into an orientation estimate, we can estimate the pitch angle of each frame. Our analysis suggests that frames without ground in view constitute about $1\%$ to $20\%$ of each video. These frames can be filtered out by discarding frames with a pitch value $>4$ degrees.
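A minimal sketch of this filtering step, assuming per-frame IMU samples synchronized with the video; the complementary-filter fusion, the axis convention (pitch about the IMU $y$-axis, camera forward along $x$), and all names below are our assumptions, not the paper's recorded pipeline:

```python
import numpy as np

def estimate_pitch_deg(accel, gyro, dt, alpha=0.98):
    """Fuse gyro and accelerometer into a pitch estimate (degrees)
    with a complementary filter. Assumed conventions: accel is
    (N, 3) in m/s^2, gyro is (N, 3) in rad/s, pitch rotates about
    the y-axis, the camera looks along +x, positive pitch = up.
    """
    # Pitch implied by gravity alone (noisy but drift-free).
    acc_pitch = np.degrees(np.arctan2(accel[:, 0],
                                      np.hypot(accel[:, 1], accel[:, 2])))
    pitch = np.empty(len(accel))
    pitch[0] = acc_pitch[0]
    for i in range(1, len(accel)):
        # Integrate the gyro (smooth but drifting), then pull the
        # estimate toward the accelerometer reading.
        gyro_pitch = pitch[i - 1] + np.degrees(gyro[i, 1]) * dt
        pitch[i] = alpha * gyro_pitch + (1 - alpha) * acc_pitch[i]
    return pitch

def frames_with_ground(frames, pitch_deg, max_pitch_deg=4.0):
    """Keep only frames whose pitch is at most 4 degrees, i.e.
    discard upward-looking frames with no ground in view."""
    return [f for f, p in zip(frames, pitch_deg) if p <= max_pitch_deg]
```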
\section{Depth estimation}
@@ -282,7 +282,7 @@ Our results indicate that depth estimated with our mobile sensor platform was cl
\caption{Evaluation metrics.}
\begin{tabular}[t]{cccccc}
\hline
-$\delta$$_1$$\uparrow$&$\delta$$_2$$\uparrow$&$\delta$$_1$$\uparrow$&$rel$$\downarrow$&$rms$$\downarrow$&$log10$$\downarrow$\\
+$\delta_1\uparrow$&$\delta_2\uparrow$&$\delta_3\uparrow$&$rel\downarrow$&$rms\downarrow$&$\log_{10}\downarrow$\\
\hline
0.678&0.839&0.881&-&37.911&0.282\\
\hline
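For reference, the column headers above are the usual monocular depth-evaluation metrics; we assume the standard definitions (thresholded accuracy, mean absolute relative error, root-mean-square error, and mean $\log_{10}$ error), where $d_i$ is the ground-truth depth at pixel $i$, $\hat{d}_i$ the predicted depth, and $N$ the number of valid pixels:

% Assumed standard definitions of the metrics in the table above.
\begin{align*}
\delta_k &= \frac{1}{N}\left|\left\{\, i : \max\!\left(\frac{\hat{d}_i}{d_i}, \frac{d_i}{\hat{d}_i}\right) < 1.25^{k} \right\}\right|, \quad k \in \{1, 2, 3\},\\
rel &= \frac{1}{N}\sum_{i=1}^{N}\frac{\lvert \hat{d}_i - d_i \rvert}{d_i}, \qquad
rms = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{d}_i - d_i\right)^{2}},\\
\log_{10} &= \frac{1}{N}\sum_{i=1}^{N}\left|\log_{10}\hat{d}_i - \log_{10} d_i\right|.
\end{align*}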