DeepVS: a deep learning approach for RF-based vital signs sensing

Zongxing Xie, Hanrui Wang, Song Han, Elinor Schoenfeld, Fan Ye
Stony Brook University, MIT
(* indicates equal contribution)

Abstract

Vital signs (e.g., heart and respiratory rates) are indicative of health status. Efforts have been made to extract vital signs using radio frequency (RF) techniques (e.g., Wi-Fi, FMCW, UWB), which offer a non-contact solution for continuous and ubiquitous monitoring without users' cooperative effort. While RF-based vital signs monitoring is user-friendly, its robustness faces two challenges. On the one hand, the RF signal is modulated in a nonlinear manner by the periodic chest wall displacement due to heartbeat and breathing. It is inherently hard to identify the fundamental heart rate (HR) and respiratory rate (RR) in the presence of their higher-order harmonics and the intermodulation between HR and RR, especially when these components occupy overlapping frequency bands. On the other hand, inadvertent body movements may disturb and distort the RF signal, overwhelming the vital signals and thus inhibiting parameter estimation of the physiological movements (i.e., heartbeat and breathing). In this paper, we propose DeepVS, a deep learning approach that addresses both challenges, non-linearity and inadvertent movements, in a unified manner for robust RF-based vital signs sensing. DeepVS combines a 1D CNN and attention models to exploit local features and temporal correlations. Moreover, it leverages a two-stream scheme to integrate features from both the time and frequency domains. Additionally, DeepVS unifies HR and RR estimation with a multi-head structure, which adds only limited extra overhead (<1%) to the existing model, compared to doubling the overhead with two separate models for HR and RR. Our experiments demonstrate that DeepVS achieves 80th-percentile HR/RR errors of 7.4/4.9 beats/breaths per minute (bpm) on a challenging dataset, compared to 11.8/7.3 bpm for a non-learning solution. In addition, an ablation study quantifies the effectiveness of DeepVS's components.
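The harmonic and intermodulation challenge described in the abstract can be illustrated with a toy simulation. This is a minimal sketch, not DeepVS's signal model: the sampling rate, motion amplitudes, and the quadratic channel nonlinearity below are all assumptions chosen for illustration. A quadratic term in the RF channel creates a second harmonic of RR (2×RR) and intermodulation products at HR±RR, which any band-limited peak search over the spectrum must contend with:

```python
import math

fs = 50.0                  # sampling rate (Hz), assumed
n = int(60 * fs)           # 60-second window
f_rr, f_hr = 0.25, 1.2     # true rates: 15 breaths/min, 72 beats/min

# Chest displacement (mm): large breathing motion plus small heartbeat motion.
d = [5.0 * math.sin(2*math.pi*f_rr*i/fs) + 0.5 * math.sin(2*math.pi*f_hr*i/fs)
     for i in range(n)]
# Toy channel nonlinearity: the quadratic term generates 2*RR, HR-RR, HR+RR.
sig = [v + 0.1 * v * v for v in d]

def power_at(f):
    """Single-bin DFT power of sig at frequency f (Goertzel-style sums)."""
    re = sum(s * math.cos(2*math.pi*f*i/fs) for i, s in enumerate(sig))
    im = sum(s * math.sin(2*math.pi*f*i/fs) for i, s in enumerate(sig))
    return re*re + im*im

def band_peak(lo, hi, step=1/60):
    """Strongest frequency on a 1/60 Hz grid inside [lo, hi] Hz."""
    f, best_f, best_p = lo, lo, -1.0
    while f <= hi + 1e-9:
        p = power_at(f)
        if p > best_p:
            best_p, best_f = p, f
        f += step
    return best_f

rr_bpm = band_peak(0.10, 0.50) * 60   # respiratory band search
hr_bpm = band_peak(0.80, 2.00) * 60   # heart band search
print(f"RR estimate: {rr_bpm:.1f} breaths/min, HR estimate: {hr_bpm:.1f} beats/min")
```

Here the fundamentals still dominate, so the band peaks land on the true rates; but as the nonlinearity strengthens or the bands overlap, the harmonic at 2×RR (0.5 Hz) and the intermodulation products at 0.95/1.45 Hz grow and can eclipse the true HR peak, which is the failure mode that motivates a learned estimator.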

Framework Overview
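The abstract's claim that a second estimation head adds <1% overhead follows from the model's structure: the shared backbone (1D CNN plus attention) dominates the parameter count, while each regression head is a small projection. The layer sizes below are hypothetical numbers for the sketch, not the published model's dimensions:

```python
# Hypothetical layer sizes -- illustrative only, not DeepVS's actual dimensions.
embed_dim = 64

# Shared backbone: three 1D-conv layers (kernel size 7) plus one attention block.
conv_params = (1*64*7 + 64) + (64*64*7 + 64) + (64*64*7 + 64)   # weights + biases
attn_params = 4 * (embed_dim*embed_dim + embed_dim)             # Q, K, V, output projections
backbone = conv_params + attn_params

# Each estimation head: one linear projection embed_dim -> 1.
head = embed_dim + 1

two_separate_models = 2 * (backbone + head)   # one full model per vital sign
multi_head_model = backbone + 2 * head        # shared backbone, HR head + RR head

extra = head / (backbone + head)              # relative cost of the second head
print(f"backbone={backbone}, head={head}, second-head overhead = {extra:.2%}")
```

Under these toy numbers the second head costs well under 1% of the single-task model, whereas two separate models double both parameters and compute, which is the comparison the abstract draws.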

Video

Citation

@inproceedings{10.1145/3535508.3545554,
author = {Xie, Zongxing and Wang, Hanrui and Han, Song and Schoenfeld, Elinor and Ye, Fan},
title = {DeepVS: a deep learning approach for RF-based vital signs sensing},
year = {2022},
isbn = {9781450393867},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3535508.3545554},
doi = {10.1145/3535508.3545554},
articleno = {17},
numpages = {5},
keywords = {vital signs, deep learning, attention mechanism, RF, CNN},
location = {Northbrook, Illinois},
series = {BCB '22}
}
