Seeing through walls with AI
Lately, Professor Dina Katabi from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has been getting closer and closer to seeing through walls with the help of artificial intelligence (AI).
In the RF-Pose project, people’s movements and postures are monitored by means of wireless devices. How does it work? Basically, the radio signals that bounce off a human body are captured, and neural networks are then used to analyse those signals.
The analysis produces a dynamic stick figure that walks, sits, and makes the exact same movements as the person on the other side of the wall.
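To make the signal-to-stick-figure idea a bit more concrete, here is a minimal, hypothetical sketch in PyTorch. This is not the CSAIL team’s actual architecture: the input size, the layer sizes, and the number of joints are all assumptions purely for illustration. The idea it shows is that a small convolutional network can take an RF reflection “heatmap” and regress 2D keypoints for a stick figure.

```python
import torch
import torch.nn as nn

NUM_KEYPOINTS = 14  # assumed number of stick-figure joints

class RFPoseSketch(nn.Module):
    """Toy model: RF heatmap in, 2D stick-figure keypoints out."""
    def __init__(self):
        super().__init__()
        # Encode a single-channel RF heatmap (assumed 64x64) into features.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Regress one (x, y) coordinate per keypoint.
        self.head = nn.Linear(32 * 16 * 16, NUM_KEYPOINTS * 2)

    def forward(self, rf_heatmap):
        feats = self.encoder(rf_heatmap)            # (batch, 32, 16, 16)
        feats = feats.flatten(start_dim=1)
        return self.head(feats).view(-1, NUM_KEYPOINTS, 2)

# One synthetic RF frame -> predicted stick-figure joints.
model = RFPoseSketch()
fake_rf = torch.randn(1, 1, 64, 64)
print(model(fake_rf).shape)  # torch.Size([1, 14, 2])
```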
Of course, privacy is an important concern for future real-world applications. Therefore, the researchers want to implement a “consent mechanism”: the person using the device has to perform a specific series of movements before the system starts measuring.
The team aims to use the system to monitor diseases like Parkinson’s, multiple sclerosis (MS), and muscular dystrophy. Doctors could then gain more insight into the progression of such a disease and adjust medication in an optimal way. Another application the team is exploring is helping the elderly live independently, with detection of falls, injuries, and unwanted changes in activity patterns. An important benefit of the RF-Pose approach is that people don’t have to wear sensors or devices, and don’t need to remember to charge those devices regularly.
The neural network was trained with the help of video footage of the people behind the wall, from which the researchers derived stick figures. Together with the video footage, they captured the corresponding radio signals. This way they had both the signals and the correct labels: the stick figures. The neural net was then trained to learn the association between signal and stick figure. After this training stage, the neural net produces the right stick figure for a radio signal in 83 out of 100 cases. No video footage of the person behind the wall is needed anymore.
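A hedged sketch of that training idea, under the same illustrative assumptions as above: a vision-based pose estimator (here just a placeholder function) turns the synchronized camera frames into stick-figure keypoints, and those keypoints serve as labels for the radio-only model. The data pairs, the placeholder teacher, and the tiny stand-in network are all invented for the example.

```python
import torch
import torch.nn as nn

NUM_KEYPOINTS = 14  # assumed number of stick-figure joints

# Tiny stand-in for the RF pose network (see the earlier sketch).
rf_model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, NUM_KEYPOINTS * 2))

def keypoints_from_video(frame):
    # Placeholder for a vision pose estimator acting as the "teacher";
    # in the real project the labels were stick figures derived from camera footage.
    return torch.zeros(1, NUM_KEYPOINTS * 2)

# Synthetic stand-ins for synchronized (RF frame, camera frame) recordings.
paired_recordings = [(torch.randn(1, 1, 64, 64), torch.randn(1, 3, 64, 64))
                     for _ in range(4)]

optimizer = torch.optim.Adam(rf_model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

for rf_frame, video_frame in paired_recordings:
    target = keypoints_from_video(video_frame)   # label comes from the camera
    pred = rf_model(rf_frame)                    # prediction uses radio only
    loss = criterion(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, only the radio signal is needed; the camera can be put away.
```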
Today, the team is working on a 3D stick figure (for now it’s only 2D) and on capturing even smaller movements, like a shaking hand.
For the original article visit this page.
Let’s conclude with a video of the project:
This blog was originally posted here.