Thermoacoustic stability prediction using Deep Learning

Renaud GAUDRON (r.gaudron@imperial.ac.uk), Aimee S. MORGANS (a.morgans@imperial.ac.uk)
Department of Mechanical Engineering, Imperial College London, London, United Kingdom
inter.noise, 21-24 August, Scottish Event Campus, Glasgow

ABSTRACT
Thermoacoustic instabilities are an undesirable physical phenomenon that can occur in a wide range of combustors. A well-established formalism for predicting thermoacoustic stability is based on physics-based models in which the combustor is represented by a sequence of connected acoustic modules. This approach has been used successfully to predict the stability of a variety of combustors, but can be relatively computationally expensive when a large number of designs is explored. One option for reducing the computational cost of predicting the thermoacoustic stability of a given configuration is to use a data-driven approach instead of a physics-based approach. In the data-driven approach, a Machine Learning (ML) algorithm is first trained to discriminate between thermoacoustically stable and unstable combustors using examples generated by a (physics-based) acoustic network model. The trained ML model can then predict the thermoacoustic stability of an unknown configuration much faster than a traditional acoustic network model and with a very high accuracy. This approach was validated in a previous study for simple combustor geometries using a classifier chain (i.e. multiple classifiers). The objective of this study is to generalise those findings by predicting the thermoacoustic stability of more complex combustor geometries using a single classifier. More advanced classification algorithms are thus required to perform that task. Two different types of Deep Neural Network architectures are tested in this study: Deep Multilayer Perceptrons with 3, 6, and 9 hidden layers, and a Convolutional Neural Network. Overall, the Convolutional Neural Network was found to be much more accurate than any of the Multilayer Perceptrons.

1. INTRODUCTION

Thermoacoustic instabilities are a physical phenomenon that can occur in a large variety of combustors, such as gas turbines [1-3], boilers [4,5], rocket engines [6,7], and many others. They are caused by a positive feedback loop between a heat source (usually a flame) and the surrounding acoustic waves [3,8,9]. These instabilities are undesirable as they can cause flame extinction [10], mechanical fatigue due to increased vibrations [10,11], and even complete failure of the combustor [10,12]. Combustors burning carbon-free fuels, such as hydrogen or ammonia, have been shown to be highly sensitive to thermoacoustic instabilities [13-15]. Predicting and controlling these instabilities is thus crucial to the development of clean combustion technologies as an alternative to traditional CO2-emitting combustors.

Various methodologies aiming to predict the occurrence of thermoacoustic instabilities have been developed in the past few decades. They can be broadly classified into three main categories: numerical simulations, where physical variables are approximated on 2D/3D meshes [16-19]; 0D/1D analytical models [1-3,20]; and hybrid methods combining models with simulations and/or experimental data [21-24]. Conservation equations are at the foundation of every one of those methods. The main advantage of hard-coding conservation equations is that the corresponding predictions automatically obey a set of physical principles (e.g. "mass is conserved in a closed system"). The main drawback is that conservation equations are mathematically and conceptually complicated. As a consequence, using physics-based approaches requires years of experience and access to substantial numerical resources.

An alternative to traditional physics-based approaches was introduced in a recent article [25].
Classification algorithms were first trained using examples generated by a low-order physics-based network model. The thermoacoustic stability of previously unknown configurations was then successfully predicted with a very high accuracy using the trained models. The first step, called training in Machine Learning (ML) terminology, was shown to have a moderate computational cost, somewhat comparable to that of traditional low-order network model tools. The second step, called inference, was shown to be extremely computationally efficient (from a thousand to a million times faster than a physics-based network model tool). The first major advantage of ML-based methods is that training and inference can be performed on different machines. For instance, a cluster can be used to train the algorithms, and a regular computer or even a microcontroller can be used to infer the stability of new configurations. Pre-trained models can also be shared as self-contained files on the internet, a common practice in the ML community. The final advantage of ML-based methods is that a pre-trained model can be used indefinitely to predict the stability of new configurations as long as the corresponding parameters remain reasonably close to those of the training examples. In practice, if the initial training step is done properly, re-training a model is seldom required and the only computational cost that matters is related to inference.

The main objective of Gaudron & Morgans 2022 was to demonstrate that classification algorithms can be used to predict the thermoacoustic stability of combustors very efficiently and with a very high accuracy [25]. That article was designed as a proof of concept and several limiting assumptions were made to simplify the problem. First, the investigated geometries were relatively simple: they all contained three elements, with the flame located at the beginning of Element 2.
In order to accurately describe lab-scale and industrial combustors, more elements are usually required and the flame should be allowed to be located anywhere. Second, the stability in each frequency range of interest was determined sequentially using a classifier chain. In other words, a first classifier was trained to predict the stability in the range [0, 50] Hz. Its binary output was then added to the inputs and another classifier was trained in the range [50, 100] Hz, and so on. This approach maximises accuracy at the expense of computational cost.

The objective of the present work is to generalise the findings of Gaudron & Morgans 2022 by using a single classifier (as opposed to a classifier chain) to predict the thermoacoustic stability of more elaborate geometries containing up to 10 elements. Deep Learning classification algorithms are employed to cope with those additional complexities.

2. DATA GENERATION USING OSCILOS

Figure 1: An example of a randomly generated geometry comprising 8 elements (Top). The blue, red, and green lines indicate the inlet, flame, and outlet locations respectively. The corresponding mean velocity (Center) and mean temperature (Bottom) are computed in OSCILOS by solving the mean conservation equations.

A lot of data is required to train classification algorithms. Historical data (i.e. a list of combustor designs with their corresponding thermoacoustic stability) could be used in theory, but the amount of available data is far too limited for training purposes. Likewise, using experimental or numerical data is theoretically possible, but infeasible in practice, as many thousands of configurations would need to be built or simulated, which would have an inordinate cost. The remaining option is to generate synthetic data using a 1D physics-based code.
In this study, the Open Source Combustion Instability Low Order Simulator (OSCILOS) is used to assess the thermoacoustic stability of a large number of randomly generated configurations. OSCILOS is a low-order network model tool that has been used to successfully predict the thermoacoustic stability of different types of combustors [20, 22, 26, 27].

Figure 2: The stability map corresponding to the randomly generated geometry represented in Fig. 1, as computed by OSCILOS for n = 0.98224 and τ = 3.2165 ms. A white star corresponds to a thermoacoustic mode, which can be stable or unstable depending on the growth rate.

Low-order network model tools such as OSCILOS rely on solving (physics-based) conservation equations and are not artificial neural networks. OSCILOS is being developed at Imperial College London and is freely available on its official website and on GitHub.

In this study, the frequencies and growth rates of all thermoacoustic modes appearing in 1,053,567 randomly generated configurations were determined using OSCILOS. Each configuration contains a random number of elements between 2 and 10. The radius of each element is randomly set between r = 0.01 m and r = 0.1 m. Likewise, the length of each element is randomly initialised such that the total length of the combustor lies between L = 0.1 m and L = 1 m. A single flame is then placed at a random location in the configuration. The mean temperature jump across the flame is set to 5. For all configurations, the mean pressure, mean temperature, and Mach number at the inlet are set to p_i = 101,325 Pa, T_i = 293 K, and M_i = 0.005 respectively. OSCILOS then computes the mean flow throughout the geometry by solving mean conservation equations for every element in the configuration.
If the solver diverges, for instance if the mean flow becomes supersonic somewhere in the geometry, the corresponding configuration is discarded. An example of a configuration generated using this procedure is represented in Fig. 1-(Top). The corresponding mean velocity and mean temperature across the geometry are represented in Fig. 1-(Center) and Fig. 1-(Bottom) respectively.

The thermoacoustic stability of a given configuration depends not only on the mean flow parameters, but also on the flame frequency response and the acoustic boundary conditions at the inlet and outlet. The flame frequency response is described in this study using an n-τ model with n ∈ [0.5, 1] and τ ∈ [0, 5] ms. Again, those flame parameters are selected at random. Furthermore, the inlet is described as a closed end with an acoustic reflection coefficient R_i = 1. Conversely, the outlet is described as an open end with an acoustic reflection coefficient R_o = -1. OSCILOS then computes a stability map by solving the first-order conservation equations for every element of the corresponding configuration, for frequencies in the range f ∈ [0, 500] Hz and growth rates in the range ω ∈ [-500, 500] s⁻¹. A 10-dimensional output vector y is then produced to describe the thermoacoustic stability of the configuration for successive frequency ranges covering 50 Hz each. For instance, if no practically unstable mode (defined as a mode with growth rate ω > -20 s⁻¹) is found in the range [0, 50] Hz, then y_0 is set to 0. If at least one practically unstable mode is found in the range [50, 100] Hz, then y_1 is set to 1, and so on until y_9 is set to either 0 or 1.
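As a concrete sketch of the labelling rule described above, the mapping from a list of computed modes to the 10-dimensional output vector y can be written as follows. The helper function is hypothetical (it is not part of OSCILOS), but it reproduces the band-width, threshold, and band-count values stated in the text.

```python
def stability_labels(modes, n_bands=10, band_width=50.0, threshold=-20.0):
    """Build the binary label vector y from a list of thermoacoustic modes.

    modes: iterable of (frequency_Hz, growth_rate_per_s) pairs.
    Band y_k is set to 1 if it contains at least one practically
    unstable mode, i.e. a mode with growth rate > -20 1/s.
    """
    y = [0] * n_bands
    for freq, growth_rate in modes:
        if growth_rate > threshold:
            band = int(freq // band_width)  # 50 Hz bands: [0,50), [50,100), ...
            if 0 <= band < n_bands:
                y[band] = 1
    return y

# Illustrative mode list loosely based on the example of Fig. 2:
# practically unstable modes at 163 Hz and 438 Hz (growth rates are
# made-up positive values) plus a strongly damped mode that is ignored.
print(stability_labels([(163.0, 30.0), (438.0, 5.0), (90.0, -300.0)]))
# -> [0, 0, 0, 1, 0, 0, 0, 0, 1, 0]
```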
Thermoacoustic modes with growth rates in the range ω ∈ [-20, 0] s⁻¹ are linearly stable, but small changes in the geometry, mean flow conditions, or flame parameters can make them linearly unstable, which is why they are considered to be practically unstable in this study. Figure 2 depicts the stability map obtained for the geometry represented in Fig. 1-(Top), for n = 0.98224 and τ = 3.2165 ms. Two practically unstable thermoacoustic modes are found, at frequencies f_I = 163 Hz and f_II = 438 Hz. The corresponding output vector is thus given by y = [0, 0, 0, 1, 0, 0, 0, 0, 1, 0].

3. DEEP LEARNING ALGORITHMS

Predicting the thermoacoustic stability of a given configuration amounts to predicting its output vector y given its geometry and flame parameters (i.e. its inputs, which can be stored in a vector x or a matrix X). Since y is multidimensional and each y_i is equal to 0 or 1, this task can be performed using multilabel classification algorithms [25]. The frequency ranges in which practically unstable modes are predicted to appear can be narrowed by increasing the number of components of y, which in turn increases the complexity of the problem.

There are three possible strategies for implementing multilabel classification. The simplest option is to train independent binary classifiers that predict the stability in each frequency range of interest (i.e. for each component of y). However, the presence of a thermoacoustic mode in a given frequency range tends to be correlated with the presence of modes in other frequency ranges, and that information is lost by training independent models. Moreover, with this approach a model needs to be trained (and called during inference) for every frequency range of interest, which increases the computational cost associated with both training and inference. The second option is to use a classifier chain, where a first classifier is used to predict the stability in the first frequency range.
The binary output (0 if no practically unstable modes are found in the frequency range, 1 otherwise) is then added to the inputs, and another classifier is trained for the next frequency range using those updated inputs. This operation is repeated until the last frequency range is reached. Using a classifier chain is a good way to capture the correlations appearing between modes in different frequency ranges. However, it is expensive to train or use one, since the number of components of the input vector increases by one every time a new frequency range is considered. The third option, employed in this study, is to predict the entire output vector y with a single classifier. Correlations between modes of different frequencies are captured using this approach, and the computational cost associated with both training and inference is reasonable compared to the previous options. The main drawback is that it is harder to achieve a high accuracy score while trying to predict several outputs at once, as opposed to having specialised models for every frequency range of interest.

Figure 3: Architecture of the MLP3 algorithm. MLP3 stands for Multilayer Perceptron with 3 hidden layers. The MLP6 (respectively MLP9) architecture is obtained by further adding 3 (respectively 6) hidden layers between the input and output layers.

Neural networks natively support multilabel classification, since the number of neurons in the output layer is equal to the number of components of the output vector y. In other words, it is possible to define neural network architectures that predict the thermoacoustic stability in every frequency range of interest at once. As mentioned above, this improvement comes at the expense of accuracy, especially for shallow neural networks.
More complex neural network architectures containing several layers between the input and output layers, sometimes called Deep Neural Networks, are thus required. Two different types of Deep Neural Network architectures are investigated in the present study: Deep Multilayer Perceptrons (MLP) and a Convolutional Neural Network (CNN). The Deep Neural Networks presented in this work are trained, tuned, and tested using Python 3 code based on several external libraries, including scikit-learn [28], keras [29], keras_tuner [30], and tensorflow [31].

Deep Multilayer Perceptrons with 3, 6, and 9 hidden layers, called MLP3, MLP6, and MLP9 respectively, are introduced first. A 23-dimensional input layer contains the lengths and radii of the elements (set to zero for excess elements), as well as the gain and phase of the n-τ model and the flame position. Every layer is fully connected to the next one, and the ReLU activation function is applied after the linear transformation (except for the output layer, where the sigmoid activation function is used instead). Each hidden layer contains 200 neurons, while the output layer contains 10 neurons, corresponding to the 10 frequency ranges covering 50 Hz each. As an illustration, Fig. 3 represents the architecture of the MLP3 algorithm. The architectures of the MLP6 and MLP9 algorithms are then obtained by adding extra hidden layers.

Figure 4: Architecture of the CNN algorithm. CNN stands for Convolutional Neural Network.

Multilayer Perceptrons are notoriously prone to overfitting because their hidden layers are fully connected to the previous and next layers [32].
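The MLP3 architecture described above can be sketched in Keras as follows. This is a minimal illustration, not the authors' code: the layer sizes and activations follow the text, the learning rate is the tuned MLP3 value from Table 1, and the binary cross-entropy loss is an assumption consistent with the per-label sigmoid outputs.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_mlp(n_hidden_layers=3):
    """Fully connected network: 23 inputs, hidden layers of 200 ReLU
    units each, and a 10-unit sigmoid output layer (one output per
    50 Hz frequency band)."""
    inputs = keras.Input(shape=(23,))
    x = inputs
    for _ in range(n_hidden_layers):
        x = layers.Dense(200, activation="relu")(x)
    outputs = layers.Dense(10, activation="sigmoid")(x)
    model = keras.Model(inputs, outputs)
    # Binary cross-entropy treats each of the 10 labels independently;
    # the learning rate is the tuned MLP3 value reported in Table 1.
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=2.10e-3),
                  loss="binary_crossentropy")
    return model

mlp3 = build_mlp(3)  # MLP6 and MLP9: build_mlp(6), build_mlp(9)
```

At inference time, `mlp3.predict` returns 10 probabilities per configuration, which are thresholded at 0.5 to obtain the binary label vector.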
A more advanced architecture, inspired by the Convolutional Neural Networks (CNNs) used in Computer Vision [33, 34], is designed in this study to address that issue. While slightly different from traditional CNNs, this architecture still relies on 2D convolutional layers and will thus be denoted as the CNN algorithm in the remainder of this work. Instead of being arranged as a vector, the inputs are arranged as a (4 × 10) matrix. The columns of the input matrix correspond to the elements in the configuration (listed sequentially), while the corresponding physical and geometrical parameters are stored row-wise, effectively creating a spatially connected representation of the geometry. The first and second rows of the input matrix store the length and radius of the corresponding element (or zeros for excess elements), while the third and fourth rows contain zeros, except at the flame location, where they store the flame parameters n and τ. A first convolutional layer, using a (3 × 7) convolving kernel and the ReLU activation function, forms 242 feature maps that have the same dimensions as the original input matrix (using zero padding). A second convolutional layer, this time using a (3 × 3) convolving kernel and the ReLU activation function, forms 269 feature maps, again of dimensions (4 × 10). Those feature maps are then flattened into a layer containing 10,760 neurons, fully connected to the next layer containing 386 neurons (activated using the ReLU function). This layer is then fully connected to the output layer containing 10 neurons, activated using a sigmoid function. A dropout layer with a rate of 0.1125 is applied during training between the last hidden layer and the output layer to prevent overfitting. Figure 4 represents the architecture of the CNN algorithm used in this work.
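The CNN architecture of Fig. 4 can be sketched in Keras as follows. Again, this is an illustrative reconstruction rather than the authors' code; the filter counts, kernel sizes, layer widths, and dropout rate are the tuned values reported in the text.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn():
    """CNN operating on the (4 x 10) input matrix described above."""
    inputs = keras.Input(shape=(4, 10, 1))  # single-channel (4 x 10) matrix
    # Zero padding ("same") keeps every feature map at (4 x 10).
    x = layers.Conv2D(242, (3, 7), padding="same", activation="relu")(inputs)
    x = layers.Conv2D(269, (3, 3), padding="same", activation="relu")(x)
    x = layers.Flatten()(x)                 # 4 * 10 * 269 = 10,760 neurons
    x = layers.Dense(386, activation="relu")(x)
    x = layers.Dropout(0.1125)(x)           # active during training only
    outputs = layers.Dense(10, activation="sigmoid")(x)  # one label per band
    return keras.Model(inputs, outputs)

cnn = build_cnn()
```

Note that, because the convolutions use zero padding, the two convolutional layers preserve the (4 × 10) spatial shape, so the flattened layer size follows directly from the 269 feature maps.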
Even though the architectures of the MLP3, MLP6, MLP9, and CNN algorithms used in this study are different, the overall procedure used to train, tune, and test those neural networks is similar and will now be discussed. First, the 1,053,567 configurations that were generated using OSCILOS are randomly split between a training/cross-validation set containing 80% of all configurations and a testing set containing the remaining 20%. Those two sets are disjoint to ensure that training and testing are independent. The input vectors used for the MLP algorithms are normalised by removing the mean and scaling to unit variance. Conversely, the input matrices assembled for the CNN algorithm are not normalised. The training/cross-validation set is further split into a cross-validation set (20%) and a training set (80%). The hyperparameters of the models are then optimised using a hyperband tuner [35], which seeks to minimise the cross-entropy loss between true labels (given by OSCILOS) and predicted labels (given by the neural networks) for the configurations in the cross-validation set, after training using the training set. For the MLP algorithms, the only hyperparameter is the learning rate of the Adam optimisation algorithm [36]. For the CNN algorithm, additional hyperparameters include the number of filters (i.e. the number of feature maps) and kernel sizes of the convolutional layers, the number of neurons in the last hidden layer, and the dropout rate. The optimal learning rates of the MLP3, MLP6, MLP9, and CNN algorithms found using the hyperband tuner are summarised in Table 1. The batch size is always set to 128 and the algorithms are trained for 50 epochs. The MLP3, MLP6, MLP9, and CNN algorithms are then re-trained using those optimal hyperparameters and their accuracy is assessed using the testing set.

Table 1: Optimal learning rates of the MLP3, MLP6, MLP9, and CNN algorithms determined using the hyperband tuner.
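The nested 80/20 splits and the standardisation of the MLP inputs can be sketched with scikit-learn as follows. The random data stands in for the OSCILOS-generated configurations; the array sizes and seed are arbitrary placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Placeholder data standing in for the OSCILOS-generated examples:
# 23 input features and 10 binary stability labels per configuration.
rng = np.random.default_rng(0)
X = rng.random((1000, 23))
y = rng.integers(0, 2, size=(1000, 10))

# 80/20 split into training/cross-validation and testing sets, then a
# further 80/20 split into training and cross-validation sets.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.2, random_state=0)

# MLP inputs are standardised (zero mean, unit variance) using the
# statistics of the training set only, to avoid information leakage.
scaler = StandardScaler().fit(X_train)
X_train_s = scaler.transform(X_train)
X_val_s = scaler.transform(X_val)
X_test_s = scaler.transform(X_test)
```

Fitting the scaler on the training set alone, then applying it to the validation and testing sets, mirrors the requirement that the sets remain disjoint and independent.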
Architecture:  MLP3         MLP6         MLP9         CNN
η_opt:         2.10 × 10⁻³  6.59 × 10⁻⁴  5.51 × 10⁻⁴  9.60 × 10⁻⁴

Figure 5: Accuracy score obtained using the testing set for the MLP3 (blue diamonds), MLP6 (orange pentagons), MLP9 (green triangles), and CNN (red circles) algorithms with optimal hyperparameters.

Figure 5 represents the accuracy scores obtained using the testing set as functions of the number of training examples for all four architectures investigated in this study. The thermoacoustic stability of a given configuration is considered to be correctly predicted if every individual label, corresponding to a single frequency range, is correctly predicted. The accuracy score of the MLP3 algorithm initially increases as the number of training examples increases, before reaching a plateau close to 70% accuracy for around 500,000 training examples. This is a strong indication that the architecture of the MLP3 algorithm is too simple for the problem at hand. Indeed, the MLP6 algorithm, which contains 3 additional hidden layers, is not only more accurate than the MLP3 algorithm, but the accuracy gap increases for increasingly large training sets because the accuracy of the MLP6 algorithm never plateaus. Interestingly, the lines corresponding to the MLP6 and MLP9 algorithms are almost superimposed, which means that adding extra hidden layers beyond this point does not seem to improve the predictions. The maximum accuracy of both the MLP6 and MLP9 algorithms is only slightly above 80% when assessed using the testing set, but significantly higher when assessed on the training set: those models are overfitting the data. The more robust CNN algorithm fares much better: when the entire training set is used, its accuracy score assessed using the testing set exceeds 95%.
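The accuracy metric used here is an exact-match (subset) accuracy: a configuration only counts as correctly predicted when all 10 labels are correct. A minimal numpy sketch (hypothetical helper, shown with a small 3-label example for brevity):

```python
import numpy as np

def exact_match_accuracy(y_true, y_pred):
    """Fraction of configurations whose full multilabel stability vector
    is predicted correctly: a single wrong frequency band makes the
    whole prediction count as wrong."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float(np.mean(np.all(y_true == y_pred, axis=1)))

# Two of the three example predictions match in every band -> 2/3.
y_true = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
y_pred = [[0, 1, 0], [1, 0, 1], [0, 0, 1]]
print(exact_match_accuracy(y_true, y_pred))
```

This criterion is stricter than per-label accuracy, which explains why the reported scores are well below the accuracy of any individual frequency band.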
As the number of configurations used to train the CNN algorithm decreases, its predictions remain significantly more accurate than those of the MLP6/MLP9 algorithms trained with the same number of configurations. It is thus concluded that the Convolutional Neural Network introduced in Fig. 4 is superior to Deep Multilayer Perceptrons when it comes to predicting the thermoacoustic stability of combustors.

4. CONCLUSIONS

Thermoacoustic instabilities are an undesirable physical phenomenon that can affect many different types of combustors. Physics-based approaches are traditionally used to predict the occurrence of those instabilities. While highly effective, those approaches can be computationally expensive and require a lot of domain knowledge. A novel methodology for predicting the thermoacoustic stability of combustors based on classification algorithms was introduced in a previous article. This new data-driven approach, which was shown to be highly accurate and computationally inexpensive, was based on a chain of binary classifiers trained on simple combustor configurations. The present study generalised those findings by using a single classifier to predict the stability of more complex combustor geometries in multiple frequency ranges at once. This was achieved by considering two different types of Deep Neural Network architectures: Deep Multilayer Perceptrons with 3, 6, and 9 hidden layers, and a Convolutional Neural Network. It was found that the Convolutional Neural Network was much more accurate than the Multilayer Perceptrons, whatever the size of the training set. An interesting extension of this work would be to design physics-informed Deep Neural Networks as a way to significantly reduce the number of configurations required during training.
ACKNOWLEDGEMENTS

The authors would like to gratefully acknowledge the European Research Council (ERC) Consolidator Grant AFIRMATIVE (2018-2023), grant number 772080, for supporting this research.

REFERENCES

[1] J. J. Keller. Thermoacoustic oscillations in combustion chambers of gas turbines. AIAA J., 33(12):2280-2287, 1995.
[2] C. O. Paschereit and W. Polifke. Investigation of the thermoacoustic characteristics of a lean premixed gas turbine burner - 98-GT-582. In Proc. ASME Turbo Expo 1998, pages 1-10, 1998.
[3] A. P. Dowling and S. R. Stow. Acoustic analysis of gas turbine combustors. J. Propul. Power, 19(5):751-764, 2003.
[4] A. Putnam. Combustion driven oscillations in industry. Elsevier, New York, 1971.
[5] F. L. Eisinger and R. E. Sullivan. Avoiding thermoacoustic vibration in burner/furnace systems. J. Press. Vessel Technol., 124(4):418-424, 2002.
[6] L. Crocco, J. Grey, and D. Harrje. Theory of liquid propellant rocket combustion instability and its experimental verification. J. Am. Rocket Soc., 30(2):159-168, 1960.
[7] D. T. Harrje and F. H. Reardon. Liquid propellant rocket combustion instability. Technical report, NASA, 1972.
[8] J. W. S. Rayleigh. The explanation of certain acoustical phenomena. Nature, 18:319-321, 1878.
[9] T. C. Lieuwen. Unsteady combustor physics. Cambridge University Press, 2005.
[10] S. Candel. Combustion dynamics and control: progress and challenges. Proc. Combust. Inst., 29:1-28, 2002.
[11] K. R. McManus, T. Poinsot, and S. Candel. A review of active control of combustion instabilities. Prog. Energy Combust. Sci., 19:1-29, 1993.
[12] T. Poinsot and D. Veynante. Theoretical and numerical combustion. 2001.
[13] E. Aesoy, J. G. Aguilar, M. R. Bothien, N. Worth, and J. Dawson. Acoustic-convective interference in transfer functions of methane/hydrogen and pure hydrogen flames. J. Eng. Gas Turbines Power, 2021.
[14] J. Beita, M. Talibi, S. Sadasivuni, and R. Balachandran. Thermoacoustic instability considerations for high hydrogen combustion in lean premixed gas turbine combustors: a review. Hydrogen, 2(1):33-57, 2021.
[15] Z. Lim, J. Li, and A. S. Morgans. The effect of hydrogen enrichment on the forced response of CH4/H2/Air laminar flames. Int. J. Hydrog. Energy, 46(46):23943-23953, 2021.
[16] Y. Huang, H. G. Sung, S. Y. Hsieh, and V. Yang. Large-eddy simulation of combustion dynamics of lean-premixed swirl-stabilized combustor. J. Propul. Power, 19(5):782-794, 2003.
[17] P. Schmitt, T. Poinsot, B. Schuermans, and K. P. Geigle. Large-eddy simulation and experimental study of heat transfer, nitric oxide emissions and combustion instability in a swirled turbulent high-pressure burner. J. Fluid Mech., 570:17-46, 2007.
[18] G. Staffelbach, L. Y. M. Gicquel, G. Boudier, and T. Poinsot. Large eddy simulation of self-excited azimuthal modes in annular combustors. Proc. Combust. Inst., 32(2):2909-2916, 2009.
[19] G. Boudier, N. Lamarque, G. Staffelbach, L. Y. M. Gicquel, and T. Poinsot. Thermo-acoustic stability of a helicopter gas turbine combustor using large-eddy simulations. Int. J. Aeroacoustics, 8(2):69-94, 2009.
[20] R. Gaudron, D. Yang, and A. S. Morgans. Acoustic energy balance during the onset, growth and saturation of thermoacoustic instabilities. J. Eng. Gas Turbines Power, 2020.
[21] W. Polifke, A. Poncet, C. O. Paschereit, and K. Döbbeling. Reconstruction of acoustic transfer matrices by instationary computational fluid dynamics. J. Sound Vib., 245:483-510, 2001.
[22] X. Han, J. Li, and A. S. Morgans. Prediction of combustion instability limit cycle oscillations by combining flame describing function simulations with a thermoacoustic network model. Combust. Flame, 162(10):3632-3647, 2015.
[23] F. Ni, M. Miguel-Brebion, F. Nicoud, and T. Poinsot. Accounting for acoustic damping in a Helmholtz solver. AIAA J., 55(4):1205-1220, 2017.
[24] M. Merk, C. Silva, W. Polifke, R. Gaudron, M. Gatti, C. Mirat, and T. Schuller. Direct assessment of the acoustic scattering matrix of a turbulent swirl combustor by combining System Identification, Large Eddy Simulation and analytical approaches. J. Eng. Gas Turbines Power, 141(2), 2019.
[25] R. Gaudron and A. S. Morgans. Thermoacoustic stability prediction using classification algorithms. Data-Centric Eng., 2022.
[26] J. Li, Y. Xia, A. S. Morgans, and X. Han. Numerical prediction of combustion instability limit cycle oscillations for a combustor with a long flame. Combust. Flame, 185:28-43, 2017.
[27] Y. Xia, D. Laera, W. P. Jones, and A. S. Morgans. Numerical prediction of the Flame Describing Function and thermoacoustic limit cycle for a pressurised gas turbine combustor. Combust. Sci. Tech., 191(5-6):979-1002, 2019.
[28] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res., 12:2825-2830, 2011.
[29] F. Chollet. Keras, 2015.
[30] T. O'Malley, E. Bursztein, J. Long, F. Chollet, H. Jin, and L. Invernizzi. Keras Tuner, 2019.
[31] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: large-scale Machine Learning on heterogeneous systems, 2015.
[32] S. Haykin. Neural networks: a comprehensive foundation. Prentice Hall, 1998.
[33] K. Fukushima. Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol. Cybern., 36(4):193-202, 1980.
[34] T. Homma, L. E. Atlas, and R. J. Marks. Artificial neural network for spatio-temporal binary patterns: application to phoneme classification. page 21, 1987.
[35] L. Li, K. Jamieson, G. DeSalvo, A. Rostamizadeh, and A. Talwalkar. Hyperband: a novel bandit-based approach to hyperparameter optimization. J. Mach. Learn. Res., 18:1-52, 2018.
[36] D. P. Kingma and J. Ba. Adam: a method for stochastic optimization. arXiv preprint, 2017.