Commit 26c5e72a authored by cn1n18's avatar cn1n18

Update: fusing IMU into orientation

parent 4e92b96a
@@ -153,6 +153,8 @@ with the rig.
\end{figure}
\section{Forest environment dataset}
\subsection{Data recording zone}
The data for our forest environment dataset were collected at the Southampton Common\footnote{\url{https://en.wikipedia.org/wiki/Southampton_Common}}, a $1.48~\mbox{km}^2$ area of woodland, rough grassland and wetlands (see Figure~\ref{fig:path}).
@@ -201,7 +203,7 @@ All the data from the rotary encoder and IMU streams were time synchronized with
\section{Quality of our forest environment dataset}
To assess the image quality of the depth data in our forest environment dataset we consider, (i) the \textit{fill rate}, which is the percentage of the depth image containing valid pixels (pixels with an estimated depth value), and (ii) the depth accuracy using ground truth data.
To assess the image quality of the depth data in our forest environment dataset we consider (i) the \textit{fill rate}, which is the percentage of the depth image containing valid pixels (pixels with an estimated depth value), (ii) the depth accuracy using ground truth data, and (iii) the quality of the upward-view data.
\noindent\textbf{Fill rate of depth images:} In our depth image data, the fill rate may be affected by the movement of the mobile sensor platform through the forest as well as by the luminosity of the scene, which influences exposure and can consequently result in motion blur. For our analysis, the instantaneous velocity and acceleration of the mobile sensor platform were estimated from the rotary encoder position data. The luminosity, or perceived brightness, was estimated from the Y luma channel of the RGB image converted to the YUV color space.
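Both image-quality measures can be computed directly from the recorded frames. The following Python sketch is illustrative only (the function names and the zero-encoding of invalid depth pixels are assumptions, not part of the released toolchain); it shows one way to obtain the fill rate and the Y-channel luminosity:
\begin{verbatim}
import numpy as np

def fill_rate(depth):
    """Percentage of pixels with a valid (non-zero) depth estimate.

    Assumes invalid pixels are encoded as 0; adapt the mask if the
    sensor reports NaN instead.
    """
    return 100.0 * np.count_nonzero(depth > 0) / depth.size

def mean_luma(rgb):
    """Mean perceived brightness: Y channel of the RGB->YUV conversion
    (BT.601 weights), averaged over the whole image."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return float(y.mean())
\end{verbatim}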
@@ -250,6 +252,8 @@ Our analysis suggest a good quality of depth data of the forest environment, wit
Our results indicate that the depth estimated with our mobile sensor platform was close to the ground truth measurements (see Figure~\ref{fig:depth-error}). Across all sampled points P1 to P9, the mean error was less than $4\%$. The highest error, $12\%$, was for point P8, which was positioned furthest from the camera.
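For reference, assuming the reported percentages are relative deviations from the ground truth distances (the exact metric is not restated here), a minimal sketch of the computation is:
\begin{verbatim}
import numpy as np

def relative_depth_error(estimated_m, ground_truth_m):
    """Per-point relative depth error in percent:
    |estimated - ground truth| / ground truth * 100."""
    est = np.asarray(estimated_m, dtype=float)
    gt = np.asarray(ground_truth_m, dtype=float)
    return 100.0 * np.abs(est - gt) / gt

# Example with hypothetical distances for three points:
# relative_depth_error([2.1, 5.2, 9.8], [2.0, 5.0, 10.0]) -> [5., 4., 2.]
\end{verbatim}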
\noindent\textbf{Images in upward view:} A small fraction of the recorded frames look upward, so the ground is not included in the field of view. Our aim is to record nearby obstacles against the far background in the depth data. If the camera looks downward too much, the colour gradient in the depth map simply follows the ground plane: the lower part of the image, which is near, is marked red, and the upper part, which is far, is marked blue. In such a case the depth map cannot capture obstacles. We therefore adjusted the camera during recording so that it acquires more obstacle depth data. By fusing the IMU data (gyroscope and accelerometer readings) into an orientation estimate, we can estimate the pitch angle of each frame. Frames without the ground in view constitute about $1\%$ to $20\%$ of each video; they can be filtered out by discarding frames with a pitch value greater than $4$ degrees.
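As an illustration of how gyroscope and accelerometer data can be fused into a per-frame pitch angle for this filtering step, the sketch below uses a simple complementary filter; the axis conventions, the sample layout, and the $0.98$ blending factor are assumptions for illustration, not the exact pipeline used to produce the dataset.
\begin{verbatim}
import numpy as np

def pitch_from_imu(gyro_pitch_rate, acc_x, acc_z, dt, alpha=0.98):
    """Estimate the pitch angle (degrees) per frame with a complementary
    filter: integrate the gyro pitch rate, compute the gravity-based
    pitch from the accelerometer, and blend the two with `alpha`.

    gyro_pitch_rate : pitch-rate samples in rad/s
    acc_x, acc_z    : accelerometer samples in m/s^2 (camera-frame axes,
                      an assumption about the IMU mounting)
    dt              : sampling interval in seconds
    """
    n = len(gyro_pitch_rate)
    pitch = np.zeros(n)
    for k in range(1, n):
        acc_pitch = np.arctan2(acc_x[k], acc_z[k])            # pitch from gravity
        gyro_pitch = pitch[k - 1] + gyro_pitch_rate[k] * dt   # integrate rate
        pitch[k] = alpha * gyro_pitch + (1.0 - alpha) * acc_pitch
    return np.degrees(pitch)

# Hypothetical filtering step: keep frames whose pitch is at most 4 degrees.
# keep_mask = pitch_from_imu(gyro, ax, az, dt) <= 4.0
\end{verbatim}
A Kalman or Madgwick filter would serve equally well; the complementary filter is shown only because it makes the gyroscope/accelerometer fusion explicit.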
\section{Depth estimation}