\section{System Architecture}
\label{sec:design}

We designed a thermal-box to collect the data. It carries four Grideye sensors at the
corners of a 10 cm square and a Lepton 3 at the center. Figure~\ref{fig:method} shows
an overview of our method, which consists of four parts. The first part fuses the data
from the four Grideye sensors into a single low-resolution image, since the resolution
of a single Grideye sensor is too low to support a decision. In the second part, we
train an SRCNN model using the fused Grideye image as the low-resolution input and the
downscaled Lepton 3 image as the high-resolution target. In the third part, we use the
super-resolution (SR) image to train a neural network that recognizes whether the
current pose is lying on the back or lying on the side. The last part reduces the noise
and the effect caused by the residual heat left on the bed after a turning over: we
remove the noise with a median filter and determine the current pose from the trend of
the probability output by the recognition network.

\begin{figure}[tbp]
	\begin{center}
		\includegraphics[width=0.9\linewidth]{figures/method.pdf}
		\caption{Illustration of the proposed method.}
		\label{fig:method}
	\end{center}
\end{figure}

\subsection{Grideye Data Fusion}

On the thermal-box, there are four Grideye sensors. At start-up, we point the
thermal-box at an empty bed and record the background temperature, which is subtracted
from every subsequent frame. After that, we resize the four $8 \times 8$ Grideye images
to $64 \times 64$ by bilinear interpolation and merge them according to the distance
between the thermal-box and the bed, the distance between the sensors, and the FOV of
the Grideye sensor. In our setup, $D_b$ is 150 cm and $D_s$ is 10 cm.

\begin{enumerate}
\item $D_b$ is the distance between the bed and the thermal-box.
\item $D_s$ is the side length of the sensor square, i.e., the distance between adjacent sensors.
\item $F$ is the FOV of the Grideye sensor, which is about 60 degrees.
\item $Overlap = 64 - 64 \times \frac{D_s}{2 D_b \tan(\frac{F}{2})}$
\end{enumerate}

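As a concrete sketch of this fusion step, the following Python code (using NumPy and
OpenCV) subtracts the background, upscales each frame, and averages the overlapping
regions; the averaging rule, the sensor ordering, and the variable names are our
assumptions for the sketch rather than a literal description of the implementation.
With the values above, the overlap is about 60 pixels.

\begin{verbatim}
import numpy as np
import cv2

D_B = 150.0              # D_b: distance between bed and thermal-box (cm)
D_S = 10.0               # D_s: distance between adjacent Grideye sensors (cm)
FOV = np.deg2rad(60.0)   # F: Grideye field of view (rad)

# Overlap (in pixels) between two adjacent upscaled 64x64 images.
overlap = int(round(64 - 64 * D_S / (2 * D_B * np.tan(FOV / 2))))

def fuse(frames, backgrounds):
    """frames: four 8x8 Grideye arrays ordered TL, TR, BL, BR;
    backgrounds: the corresponding empty-bed temperature frames."""
    up = [cv2.resize(f - b, (64, 64), interpolation=cv2.INTER_LINEAR)
          for f, b in zip(frames, backgrounds)]
    step = 64 - overlap                 # offset of each sensor in the mosaic
    size = 64 + step                    # side length of the fused image
    acc = np.zeros((size, size))
    cnt = np.zeros((size, size))
    for img, (r, c) in zip(up, [(0, 0), (0, step), (step, 0), (step, step)]):
        acc[r:r + 64, c:c + 64] += img
        cnt[r:r + 64, c:c + 64] += 1
    return acc / np.maximum(cnt, 1)     # average where the images overlap
\end{verbatim}
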
\subsection{Turning Over Determination}

We train an SRCNN model on pairs of fused Grideye images and downscaled Lepton 3
images, and use it to enhance all subsequent Grideye frames into SR frames. We labeled
a set of SR frames into two categories, lying on the back and lying on the side. Since
the input data are very small, we use a neural network consisting of one 2D convolution
layer, one 2D max-pooling layer, one flatten layer, and one densely connected layer.
The output probability varies widely just after a turning over, because the model
cannot distinguish the residual heat on the bed from the person, as shown in
Figure~\ref{fig:residual_heat}. This effect slowly disappears after one or two minutes.

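As an illustration, a pose classifier with this layer structure could be defined as
follows in Keras; the input size, filter count, kernel size, and optimizer are
assumptions made for the sketch, not values reported here.

\begin{verbatim}
from tensorflow import keras
from tensorflow.keras import layers

# Small pose classifier: one convolution, one max pooling, one flatten,
# and one dense layer, as described above. Hyperparameters are assumed.
model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),            # one SR thermal frame
    layers.Conv2D(16, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(2, activation="softmax"),      # lay on back vs. lay on side
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
\end{verbatim}
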
To determine the pose, we first apply a median filter with a window size of five to
remove the noise. Then, we find the convex hull lines of the upper and lower bounds of
the filtered data. Finally, we calculate the middle line between the upper and lower
bounds and regard it as the trend of the pose change. Figure~\ref{fig:trend} shows the
filtered data and these lines.

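A minimal sketch of this smoothing step is given below; it approximates the upper and
lower bound lines by interpolating through the local extrema of the filtered signal,
which is our simplification of the hull construction.

\begin{verbatim}
import numpy as np
from scipy.signal import medfilt, argrelextrema

def pose_trend(prob):
    """prob: 1-D array of per-frame probabilities from the recognition network."""
    p = medfilt(prob, kernel_size=5)             # median filter, window size five
    t = np.arange(len(p))
    hi = argrelextrema(p, np.greater_equal)[0]   # local maxima -> upper bound
    lo = argrelextrema(p, np.less_equal)[0]      # local minima -> lower bound
    upper = np.interp(t, hi, p[hi]) if len(hi) > 1 else p
    lower = np.interp(t, lo, p[lo]) if len(lo) > 1 else p
    return (upper + lower) / 2                   # middle line = pose trend
\end{verbatim}
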
We divide the data into 10-second time windows. If the middle line within a time window
lies in the top one fifth of the range, or the trend is going up, the window is
classified as lying on the back; otherwise, it is classified as lying on the side. If
three consecutive windows share the same pose, and that pose differs from the one after
the last turning over, we count it as another turning over.

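The following sketch illustrates this windowed decision rule; interpreting the top one
fifth as a fraction of the trend's range, testing the rising trend with the window's
endpoints, and the sampling-rate parameter are assumptions made for the example.

\begin{verbatim}
def count_turning_overs(middle, fps, window_sec=10):
    """middle: trend line from pose_trend(); fps: frames per second."""
    win = int(window_sec * fps)
    lo, hi = middle.min(), middle.max()
    turning_overs, confirmed, streak_pose, streak = 0, None, None, 0
    for start in range(0, len(middle) - win + 1, win):
        seg = middle[start:start + win]
        in_top_fifth = seg.mean() > hi - (hi - lo) / 5  # top one fifth of range
        going_up = seg[-1] > seg[0]                     # trend is rising
        pose = "back" if in_top_fifth or going_up else "side"
        streak = streak + 1 if pose == streak_pose else 1
        streak_pose = pose
        # Three consecutive windows with the same pose, different from the pose
        # after the last turning over, count as another turning over.
        if streak >= 3 and pose != confirmed:
            if confirmed is not None:
                turning_overs += 1
            confirmed = pose
    return turning_overs
\end{verbatim}
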
\begin{figure}[tbp]
  \centering
  \minipage{0.25\columnwidth}
    \includegraphics[width=\linewidth]{figures/Lepton_residual_heat.png}
    \caption{Residual heat on the bed.}
    \label{fig:residual_heat}
  \endminipage
  \minipage{0.55\columnwidth}
    \includegraphics[width=\linewidth]{figures/MinMax_2.pdf}
    \caption{Trend of the pose.}
    \label{fig:trend}
  \endminipage
\end{figure}
 
 