Our health is of paramount importance to our wellbeing. The pandemic caused by the outbreak of Coronavirus Disease 2019 (COVID-19) has led society to reconsider its priorities, lifting health to the very top. This increased awareness paves the way for new digital tools that can help not only individuals to monitor their health states in real time, but also clinicians to obtain long-term, in-depth reports about their patients, enriching the information available for diagnosis.
Smart devices are a key component in achieving this goal. These devices, including smartphones and smartwatches, feature a wide range of sensors that capture information reflecting human states non-intrusively and ubiquitously. To analyse the sensed data, Artificial Intelligence (AI) techniques can be used to detect patterns and biomarkers suitable for the detection of diseases. Holistic IoT-based systems powered by AI therefore have the potential to revolutionise the provision of health care as we currently know it.
To control the spread of COVID-19, the detection of cases has proven to be one of the most effective strategies. However, current medical diagnostic tools, such as PCR and antigen tests, are expensive, time-consuming, and generate a large amount of waste. To overcome these issues, we envision new digital health solutions that can be deployed at large scale and cost-effectively as a pre-screening tool for detecting COVID-19 using the microphone embedded in smart devices.
COVID-19 symptomatology includes impairments of the respiratory system. Consequently, one could argue that acoustic signals produced by the vocal system might contain salient information for the detection of the virus. In this regard, some of our most recent works have focused on the automatic detection of COVID-19 from the analysis of coughs, breathing, and speech.
As the acoustic features that best characterise COVID-19 symptomatology are still an open research question, we opted to use Convolutional Neural Networks (CNNs) to extract deep-learnt features from spectrogram representations of the acoustic signals. This way, the CNNs learn to extract the most salient features for the task at hand. These embedded representations can then be used to classify whether the input acoustic signal corresponds to a healthy individual or a COVID-19 patient.
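To make the pipeline concrete, the following is a minimal, hypothetical sketch of the idea (not the authors' actual model): an audio clip is turned into a log-magnitude spectrogram, passed through a single untrained convolutional layer with ReLU, globally average-pooled into an embedding, and fed to a logistic classifier. All function names, filter sizes, and weights here are illustrative assumptions; a real system would train the filters and classifier end-to-end on labelled cough, breathing, and speech recordings.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram of a 1-D audio signal (Hann-windowed STFT)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (time, freq)

def conv2d_valid(x, kernel):
    """Naive 2-D 'valid' cross-correlation, as used in CNN conv layers."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def embed_and_classify(signal, kernels, weights, bias):
    """Spectrogram -> conv + ReLU -> global average pooling -> logistic output."""
    spec = np.log1p(spectrogram(signal))
    # One embedding dimension per filter: the global average of its
    # rectified feature map.
    embedding = np.array([np.maximum(conv2d_valid(spec, k), 0.0).mean()
                          for k in kernels])
    logit = embedding @ weights + bias
    return embedding, 1.0 / (1.0 + np.exp(-logit))  # P(positive class)

rng = np.random.default_rng(0)
audio = rng.standard_normal(4096)                # stand-in for a recorded clip
kernels = rng.standard_normal((4, 3, 3)) * 0.1   # 4 untrained 3x3 conv filters
weights, bias = rng.standard_normal(4), 0.0
embedding, p_covid = embed_and_classify(audio, kernels, weights, bias)
print(embedding.shape, 0.0 <= p_covid <= 1.0)
```

The key design point the sketch illustrates is that the handcrafted step stops at the spectrogram: everything after it (the filters and the classifier) is learnt, so the network itself decides which time-frequency patterns are salient for the task.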
Digital health solutions based on ubiquitous sensor data coupled with AI can contribute to the early detection of diseases, easing their diagnosis, and to the monitoring of patients towards personalised treatment plans. We have the opportunity to put technology at the service of society, helping us improve our health and overall wellbeing. Let's make the most of it.