A Distribution-Adaptive Attention Transformer for Blood Pressure Prediction

Authors

  • Chen Wang

DOI:

https://doi.org/10.54097/s78vmw93

Keywords:

Blood Pressure Prediction, Distribution-Adaptive Attention, Non-Stationary Sequences, Distribution Disparities

Abstract

Accurately predicting intraoperative blood pressure is essential for preventing intraoperative hypotension. However, intraoperative blood pressure prediction often faces substantial disparities in sample distribution, arising from individual physiological differences among patients and the diversity of surgical conditions. These disparities can degrade model performance on inconsistently distributed samples, limiting generalizability in real-world clinical settings. To address this challenge, this paper develops a distribution-adaptive attention strategy that mitigates distributional differences by incorporating statistical information into the model's internal design. The strategy automatically adjusts the model's attention weights by learning statistical information from each input window, enhancing its ability to capture the inherent distribution patterns of non-stationary sequences and thereby reducing performance fluctuations caused by distributional differences. Multiple comparative experiments demonstrate the effectiveness of the proposed strategy.
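The abstract's core idea, learning per-window statistics and using them to modulate attention weights, can be sketched as follows. This is a minimal illustrative toy, not the paper's actual architecture: the projection matrices `Wq`, `Wk`, `Wv` and the statistics-to-scale mapping `Ws` are hypothetical names, and the choice to rescale the attention logits by a factor derived from the window's mean and standard deviation is an assumption about how "statistical information from input windows" could adjust attention.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def distribution_adaptive_attention(x, Wq, Wk, Wv, Ws):
    """Toy distribution-adaptive attention over one input window.

    x  : (seq_len, d) raw, possibly non-stationary window
    Wq, Wk, Wv : (d, d) query/key/value projections (hypothetical)
    Ws : (2,) hypothetical mapping from the window's [mean, std]
         statistics to a scalar that rescales the attention logits,
         so windows with different distributions are attended
         to differently.
    """
    # Per-window statistics: the "statistical information" the
    # strategy learns from each input window.
    mu, sigma = x.mean(), x.std() + 1e-8
    x_norm = (x - mu) / sigma                  # stationarized window
    q, k, v = x_norm @ Wq, x_norm @ Wk, x_norm @ Wv
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)
    # A positive scale learned from the window statistics modulates
    # the logits, re-injecting the distribution information that
    # normalization removed.
    tau = np.exp(np.array([mu, sigma]) @ Ws)
    weights = softmax(tau * logits)
    return weights @ v
```

Because normalization makes `x` and a shifted copy `x + c` produce identical queries and keys, the statistics-derived scale `tau` is what lets the two windows yield different attention patterns.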




Published

27-03-2024


How to Cite

A Distribution-Adaptive Attention Transformer for Blood Pressure Prediction. (2024). Academic Journal of Science and Technology, 10(1), 456-462. https://doi.org/10.54097/s78vmw93
