==== Yang, R., Lv, K., Huang, Y., Sun, M., Li, J., & Yang, J. (2023). Respiratory Sound Classification by Applying Deep Neural Network with a Blocking Variable. ''Applied Sciences'', 13(12), 6956. <nowiki>https://doi.org/10.3390/app13126956</nowiki> ====
* Summary: The paper introduces a deep neural network named Blnet for classifying respiratory sounds. It combines elements of ResNet, GoogLeNet, and self-attention mechanisms to tackle the non-IID (not independently and identically distributed) nature of the data and the imbalance between classes. On the ICBHI 2017 respiratory sound database, the model improves sensitivity and specificity over existing methods.
* RQ: How can a deep neural network be optimized for classifying respiratory sounds, to facilitate early detection of respiratory diseases, given challenges such as non-IID data and imbalanced datasets?
* Hypothesis: Integrating ResNet, GoogLeNet, and self-attention mechanisms into a deep neural network, together with a two-stage training process and mix-up data augmentation within clusters, can significantly improve respiratory sound classification accuracy despite imbalanced and non-IID data.
* Conclusion: The Blnet model addressed the challenges of non-IID and imbalanced data in respiratory sound classification, achieving a 4.22% improvement in average score and a 12.61% improvement in sensitivity over state-of-the-art results. This performance gain underscores the efficacy of the proposed network architecture and training strategies.
* Critical observations:
** The two-stage training process and the introduction of a blocking variable proved effective in managing non-IID data, suggesting the importance of accounting for data distribution in deep learning models.
** Mix-up data augmentation within clusters and the use of multiple input transformations (STFT and wavelet transform, WT) were critical in addressing data imbalance and improving model robustness (a minimal sketch of cluster-wise mix-up follows this list).
** The self-attention mechanism captured global dependencies within the data, improving the model's feature extraction capabilities.
** Simplifying the loss function by treating the four-class classification task as two independent binary classification tasks made training more effective (see the two-head loss sketch below).
* Relevance: The techniques and findings have direct implications for ASR systems, particularly for improving performance on non-IID and imbalanced datasets. The feature extraction and classification methods used for respiratory sound analysis can inform approaches to noise reduction, signal processing, and robust model training in ASR. The attention mechanisms and data augmentation strategies could likewise be adapted to help ASR systems cope with diverse and challenging acoustic environments.
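The paper's exact mix-up-within-clusters implementation is not reproduced here; the following is a minimal sketch under the assumption that features are NumPy arrays of time-frequency representations, labels are one-hot, and cluster assignments have already been computed. The function and parameter names are illustrative, not taken from the paper.

<syntaxhighlight lang="python">
import numpy as np

def mixup_within_clusters(features, labels, cluster_ids, alpha=0.2, rng=None):
    """Mix-up augmentation restricted to partners drawn from the same cluster.

    features    : (N, ...) array of time-frequency features (e.g. spectrograms)
    labels      : (N, C) one-hot label array
    cluster_ids : (N,) integer cluster assignment per sample
    alpha       : Beta-distribution parameter controlling mixing strength
    """
    rng = rng or np.random.default_rng()
    mixed_x, mixed_y = [], []
    for c in np.unique(cluster_ids):
        idx = np.where(cluster_ids == c)[0]
        partners = rng.permutation(idx)              # mixing partners from the same cluster
        lam = rng.beta(alpha, alpha, size=len(idx))  # one mixing coefficient per pair
        for i, j, l in zip(idx, partners, lam):
            mixed_x.append(l * features[i] + (1.0 - l) * features[j])
            mixed_y.append(l * labels[i] + (1.0 - l) * labels[j])
    return np.stack(mixed_x), np.stack(mixed_y)
</syntaxhighlight>

Restricting mixing partners to the same cluster keeps the interpolated samples close to the distribution of their cluster, which is the intuition behind applying mix-up within clusters rather than across the whole dataset.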
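The decomposition of the four ICBHI classes into two binary decisions can also be sketched. This is an assumed reconstruction, not the paper's code: the label coding (0 = normal, 1 = crackle, 2 = wheeze, 3 = both) and the use of <code>BCEWithLogitsLoss</code> are illustrative choices.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class TwoHeadBinaryLoss(nn.Module):
    """Replaces a 4-way loss over {normal, crackle, wheeze, both} with two
    independent binary losses: "crackle present?" and "wheeze present?"."""

    def __init__(self):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, crackle_logit, wheeze_logit, label_4class):
        # Assumed label coding: 0 = normal, 1 = crackle, 2 = wheeze, 3 = both.
        crackle_target = ((label_4class == 1) | (label_4class == 3)).float()
        wheeze_target = ((label_4class == 2) | (label_4class == 3)).float()
        return self.bce(crackle_logit, crackle_target) + self.bce(wheeze_logit, wheeze_target)

# Usage with a batch of 8 respiratory cycles and two logit heads from the network:
loss_fn = TwoHeadBinaryLoss()
crackle_logit = torch.randn(8)
wheeze_logit = torch.randn(8)
labels = torch.randint(0, 4, (8,))
loss = loss_fn(crackle_logit, wheeze_logit, labels)
</syntaxhighlight>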