Volume 44, Part 2
INTER-NOISE, 21-24 August, Scottish Event Campus, Glasgow

A machine learning- and compressed sensing-based approach for surrogate modelling in environmental acoustics: towards fast evaluation of building façade road traffic noise levels

Sacha Baclet (baclet@kth.se) – The Centre for ECO2 Vehicle Design, The Marcus Wallenberg Laboratory for Sound and Vibration Research (MWL), KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden
Hamza Bouchouireb – The Centre for ECO2 Vehicle Design, Department of Engineering Mechanics, KTH Royal Institute of Technology
Siddharth Venkataraman – The Centre for ECO2 Vehicle Design, The Marcus Wallenberg Laboratory for Sound and Vibration Research (MWL), KTH Royal Institute of Technology
Erik R. Gomez – The Marcus Wallenberg Laboratory for Sound and Vibration Research (MWL), KTH Royal Institute of Technology

ABSTRACT
State-of-the-art urban road traffic noise propagation simulation methods such as the CNOSSOS-EU framework rely on ray tracing to estimate noise levels at specific locations on façades, so-called receiver points. This method is relatively computationally expensive and its cost increases with the number of receiver points, which limits the spatial resolution of such simulations in the context of real-time or near-real-time urban noise simulation applications. This contribution investigates the applicability of multiple data-driven methods to the surrogate modelling of traffic noise propagation for fast façade noise calculation, as an alternative to these traditional, ray-tracing-based methods. The proposed approach uses compressed sensing to select a small subset of optimal receiver points from which the noise levels over the entire façade may be reconstructed, combined with either a kriging model or neural networks to predict noise levels at these sensors. The prediction performance of each of these steps is evaluated on an academic test case, with two levels of complexity based on the dimensionality of the problem.

1. INTRODUCTION
Road traffic noise is an increasingly major concern in urban areas for its impact on health and comfort. To tackle this issue, mitigate its impacts, and measure the effectiveness of mitigation measures, it is necessary to understand how people are currently being exposed to noise. In this context, traffic noise mapping is one of the tools that can be used to assess noise exposure. A conventional method for the generation of traffic noise maps is based on ray tracing. It is for instance used by CNOSSOS-EU [1], the standard framework for producing noise maps in the EU. This method is relatively computationally expensive, which is acceptable when generating maps every 5 years, as required by EU regulation [2]. However, recent works have shown that dynamic noise mapping is better at characterising noise exposure [3], thereby prompting a need for more frequent noise mapping. Calculation times that used to be negligible have therefore become a limiting factor in the generation of dynamic or near-real-time noise maps. Several conventional approaches have been explored to reduce this computation time, such as interpolating noise measurements [4] or storing the computed attenuation between source positions and receiver positions [5, 6]. Nonetheless, even using these methods, it would still be too computationally expensive to describe the sound levels on the façades of the buildings of a city in real time with a very high spatial resolution (e.g. 1 meter), as the high number of paths between all possible source positions and receiver positions would be prohibitively costly to calculate and store.
Data-driven methods could be a potential solution to speed up these simulations. Indeed, data-driven methods, including machine learning (ML), are increasingly used in multiple fields of acoustics, with good performance compared to conventional processing in many scenarios [7]. In the field of environmental noise, applications of data-driven methods include the use of kriging and principal component analysis (PCA) for the statistical interpolation of noise measurements [8] and for an urban noise mapping meta-model [9], and the use of neural networks for the processing of captured urban sound [10], as an alternative to A-weighted noise mapping [11], for sound classification [12], or for long-range sound propagation [13]. To the knowledge of the authors, compressed sensing has not yet been applied to the field of environmental acoustics, and data-driven methods in general have not been applied for the purpose of speeding up façade traffic noise simulations with high spatial resolution.

This contribution thus aims to explore the applicability of multiple data-driven methods, combining compressed sensing with either kriging or neural networks, for fast façade noise calculation as an alternative to the conventionally used ray-tracing-based methods. For this purpose, the next sections describe and evaluate a preliminary surrogate model for fast façade noise calculation, applied to an academic test case. First, a description of the chosen academic test case is given, and the two explored levels of geometric complexity are explained. The following section details the main methodological steps, which include the generation of training and validation data, the identification of optimal receiver locations using compressed sensing, kriging and neural networks for surrogate modelling, and the reconstruction of noise levels from the surrogate models' predictions. Finally, the error resulting from each step is presented and discussed.

2. DESCRIPTION OF THE TEST CASE AND MODEL COMPLEXITY
In order to test the performance of the surrogate model, a simplified noise propagation problem is defined. This test case is illustrated through the geometrical model shown in Fig. 1. A single source of noise is assumed, and this source is positioned within a simplified urban infrastructure. The urban infrastructure consists of a single straight road segment that runs between two parallel rectangular buildings positioned on opposite sides of this road. A rectangular grid of receiver points, placed 1 meter apart and 0.5 meter from the wall, where sound levels are to be estimated, is positioned along one of the façades facing the road. The noise source is assumed to be a single vehicle on the road segment.

Figure 1: Geometric illustration of the academic test case. Input is a parametrised description of source position and environment. Output is the noise levels at a receiver grid.

The input of the surrogate model is a parametrised description of the urban infrastructure and source position. The model then predicts the noise levels at the rectangular grid of receiver positions. Two levels of modelling complexity are considered for the surrogate model's input, hereafter referred to as level 1 and level 2. The two levels are described in terms of their input and output variables in Fig. 1 and in the following table:

           Level 1                             Level 2
  Input    Source position (x, y)              Source position (x, y) and building heights (H1, H2)
  Output   Noise levels at receiver grid S1    Noise levels at receiver grid S1
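To make the two input parametrisations concrete, they can be represented as simple data containers. The following is a minimal, illustrative sketch: the class and field names are not from the original implementation, and the units are assumptions.

    from dataclasses import dataclass

    @dataclass
    class Level1Input:
        """Level 1: only the source position varies; the building geometry is fixed."""
        x: float  # source position along the road [m]
        y: float  # source position across the road [m]

    @dataclass
    class Level2Input(Level1Input):
        """Level 2: the heights of the two buildings are additional input parameters."""
        h1: float  # height of building 1 [m]
        h2: float  # height of building 2 [m]

    # In both cases, the surrogate model's output is a vector of noise levels (dB)
    # at the receiver grid S1 (one value per receiver point).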
3. METHODOLOGY
This contribution uses two combined but independent data-driven approaches to estimate façade noise levels: compressed sensing for dimensionality reduction, and kriging or neural networks for predicting the noise levels. The procedure for implementing these approaches within the context of this work is described using a flowchart in Fig. 2. The main components highlighted in this flowchart are described below:

– (I) The first component is the generation of input and output data for the training of the surrogate model (Section 3.2). The generation of output data requires running noise propagation simulations using NoiseModelling – an open-source environmental noise mapping tool (Section 3.1). The relevant input parameters to this tool (simulation input), along with the corresponding relevant output noise levels (simulation output), form the entire dataset. This simulated data is split into a "training dataset" and a "validation dataset", respectively dedicated to the training and to the evaluation of the performance of the surrogate model.

– (II) The second component is compressed sensing (Section 3.3). The façade noise levels that are simulated by the software have a high dimensionality (about 200 receiver points) and show strong spatial patterns. This allows for a dimensionality reduction of the simulation output, using compressed sensing based on singular value decomposition (SVD) and QR decomposition, as described by Manohar et al. [14]. In this work, compressed sensing consists in selecting an optimal set of receiver positions on the façade that can be used for reconstructing the noise levels over the entire façade (with an error related to the chosen number of optimal sensors). Given a choice of optimal sensors, only the simulation output at these sensors needs to be used as target data for training the surrogate models.

– (III) The third component is the training of the surrogate model. The target data for the training is the simulation output at the optimal receiver positions determined using compressed sensing. Three types of surrogate models are trained: a single layer neural network, a deep neural network, and kriging (Section 3.4).

– (IV) The final component of the workflow is the reconstruction of the surrogate model predictions (in reduced dimensions) for the entire façade. The reconstruction procedure is coupled to the compressed sensing technique implemented in step (II). The reconstruction adds an error to the façade noise levels that is independent of that from the surrogate model.

Figure 2: Flowchart illustrating the project workflow and its main components. (I) is the data generation, (II) is compressed sensing, (III) is the surrogate modelling, and (IV) is the reconstruction of the compressed signal.

The final prediction of the façade noise levels includes error from two sources: compressed sensing and surrogate modelling. The error between expected and predicted / reconstructed values is calculated at three stages along the workflow (marked as ∆ in Fig. 2) using the validation dataset: (i) after compressed sensing reconstruction alone; (ii) after surrogate modelling alone; (iii) after reconstruction using the surrogate model predictions (total error). This makes it possible not only to check the overall accuracy, but also to assess the relative contributions of the two error sources. For each stage of error quantification, the root mean-squared error (RMSE) and mean absolute error (MAE) are calculated.
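For reference, the two metrics can be computed as follows. This is a minimal sketch with illustrative names, not the original evaluation code; it assumes the expected and predicted (or reconstructed) façade levels are stored as NumPy arrays of matching shape.

    import numpy as np

    def rmse(expected, predicted):
        """Root mean-squared error between expected and predicted levels (dB)."""
        diff = np.asarray(expected) - np.asarray(predicted)
        return float(np.sqrt(np.mean(diff ** 2)))

    def mae(expected, predicted):
        """Mean absolute error between expected and predicted levels (dB)."""
        diff = np.asarray(expected) - np.asarray(predicted)
        return float(np.mean(np.abs(diff)))

Both metrics are evaluated on the validation dataset at each of the three stages (i)-(iii) described above.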
3.1. NoiseModelling
NoiseModelling is an open-source noise mapping tool which, at the time of writing, is almost compliant with the CNOSSOS-EU standard. In the present contribution, the method used for noise propagation is thus the one defined by the CNOSSOS-EU framework, although the methodology is not limited to this framework and could be applied to other propagation methods. This calculation of the attenuation is a nonlinear process. No noise emission model was required, as the CNOSSOS-EU framework is only used to calculate the noise attenuation between a source with a constant sound power level (SWL) and the receiver points. This attenuation only depends on the geometry of the area and not on the noise emission model of the source. Any source model can thus be used subsequently to calculate actual noise levels from the attenuation. NoiseModelling outputs the attenuation summed over the entire spectrum as a single value (in dB). For level 1, this output is post-processed into a two-dimensional matrix of sound levels in which each row corresponds to a vehicle position and each column to a receiver. For level 2, the data structure is more complex, as more input parameters (including the heights of the buildings) are included; the data is therefore stored in a pandas DataFrame.

3.2. Data generation
In order to generate the different datasets for the surrogate models, a sampling plan is necessary. An intuitive rule of thumb can be used when generating such a sampling plan: a uniform level of surrogate model accuracy throughout the sampling space requires a uniform spread of sample points in that space. One way of achieving this is Latin Hypercube Sampling (LHS). This approach splits each dimension of the sampling space into equally sized intervals (bins) and places sample points such that each bin along each dimension contains exactly one point. This approach is used for the generation of source positions for level 1, as well as for the generation of the four-dimensional sampling plans including the heights of the buildings and the source locations for level 2 (a minimal sampling sketch is given after Table 1). The sound pressure level (SPL) data was generated separately for each level. An example of the obtained data is given in Fig. 3 for level 1. Table 1 details the number of data points generated for each level.

Figure 3: Top: SPLs (dB) on the façade for level 1, for a given source position. Bottom: a top view of the buildings and the road. The star indicates the source position. The building on top is the one where the receiver points are located.

  Level   Sources   Building heights   DP per receiver   Receivers   Total DP (millions)
  1       10,000    -                  10,000            180         1.8
  2       1,000     196                196,000           189         37

Table 1: Number of source positions, building heights, receiver points, and resulting data points (DP) generated for each level. The total number of data points is given in millions.
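As referenced above, a minimal sketch of how such an LHS sampling plan could be generated for level 2 with SciPy is shown below. The variable bounds are placeholders, since the actual ranges of the test-case geometry are not specified here.

    import numpy as np
    from scipy.stats import qmc

    # Four-dimensional level-2 sampling plan: source position (x, y) and building heights (H1, H2).
    sampler = qmc.LatinHypercube(d=4, seed=0)
    unit_plan = sampler.random(n=1000)                   # 1,000 samples in the unit hypercube [0, 1]^4
    lower = [0.0, -3.5, 5.0, 5.0]                        # illustrative lower bounds for x, y, H1, H2
    upper = [100.0, 3.5, 30.0, 30.0]                     # illustrative upper bounds for x, y, H1, H2
    sampling_plan = qmc.scale(unit_plan, lower, upper)   # shape (1000, 4), one row per sample

For level 1, the same procedure applies with d=2, since only the source position (x, y) varies.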
3.3. Compressed sensing
Compressed sensing is a dimensionality reduction method that enables the reconstruction of an entire dataset from a relatively small subset of the data with good accuracy, using optimization techniques. The method used here is based on the singular value decomposition (SVD) of the data matrix and the permutation matrix of a QR decomposition [14]. The dataset is sparse in the basis provided by the SVD, with only a few dominating terms among the singular values if there is enough redundancy in the dataset, which allows for the use of compressed sensing; this hypothesis is verified in Section 4.1. The QR decomposition permutation matrix is then used for the selection of the optimal sensors. The number of optimal sensors can be chosen arbitrarily, but a higher number leads to lower errors during the reconstruction process. In the context of this contribution, compressed sensing is applied to the grid of sensors on the façade in order to optimally select a subset of sensors from which the levels simulated for the entire grid of sensors can be reconstructed. Subsequently, the surrogate models would only need to be trained on the data generated for this subset of receiver points, which would significantly reduce training time.
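A minimal sketch of this sensor selection and reconstruction procedure, in the spirit of the QR-pivoting approach of Manohar et al. [14], is given below. It assumes the simulated façade levels are arranged in a matrix with one row per receiver and one column per sample; function and variable names are illustrative and do not come from the original implementation.

    import numpy as np
    from scipy.linalg import qr

    def select_sensors(X_train, rank):
        """Select `rank` near-optimal receivers from the training data matrix.

        X_train: array of shape (n_receivers, n_samples), one column per simulated
        source position (level 1) or source/geometry combination (level 2).
        Returns the selected receiver indices and the truncated SVD basis.
        """
        U, _, _ = np.linalg.svd(X_train, full_matrices=False)
        basis = U[:, :rank]                        # dominant spatial modes of the facade levels
        _, _, pivots = qr(basis.T, pivoting=True)  # column-pivoted QR picks the most informative rows
        return pivots[:rank], basis

    def reconstruct(levels_at_sensors, sensor_indices, basis):
        """Estimate the levels at all receivers from the levels at the selected sensors."""
        coeffs, *_ = np.linalg.lstsq(basis[sensor_indices, :], levels_at_sensors, rcond=None)
        return basis @ coeffs

In the workflow of Fig. 2, a function like select_sensors would be fitted on the training dataset (step II); a function like reconstruct would then be applied either to simulated levels from the validation dataset, to estimate the compressed sensing error alone, or to the surrogate model predictions at the selected sensors (step IV, total error).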
3.4. Architecture choice for the surrogate model
Single layer neural network: A single layer neural network (1-NN), considered to be the most basic type of neural architecture, is chosen as a benchmark for comparison with the other data-driven approaches. It comprises a single layer of 64 neurons with a rectified linear unit (ReLU) activation function. The size of the input depends on the problem complexity: level 1 has two inputs, while level 2 has four. Similarly, the size of the output layer depends on the number of optimal sensors used for compressed sensing. The output layer uses a linear activation function. The network is trained using the Adam optimizer, a stochastic gradient descent method, together with a mean-squared-error (MSE) loss function.

Deep neural network: A deep neural network (DNN) is chosen as one of the investigated architectures for the surrogate model. The DNN consists of a normalisation input layer, followed by four dense layers of 64 neurons each using a ReLU activation function. Finally, a dense output layer with a linear activation function and a size corresponding to the chosen number of optimal receivers on the façade is used. Early experimentation with this DNN showed that the prediction error is reduced when the sound pressure levels (SPLs) used for training, in decibels, are converted to pascals ("unlogged") prior to the training process; this step is thus applied to the training data. Like the 1-NN, the network is trained using the Adam optimizer together with an MSE loss function.

Kriging: Kriging is chosen as another investigated architecture for the surrogate model. The kriging surrogate is based on the sum of the realization of a regression model and a stochastic process. It takes the form $\hat{y}(x) = \sum_{j=1}^{k} \beta_j f_j(x) + Z(x)$, where the $f_j$ are known functions, the $\beta_j$ are the regression parameters, and $Z$ is a realization of a stochastic process with zero mean and a spatial covariance function of the form $\mathrm{cov}[Z(x^{(1)}), Z(x^{(2)})] = \sigma^2 R(x^{(1)}, x^{(2)})$, where $\sigma^2$ is the random process variance and $R$ is the multidimensional correlation function. Here, this function is obtained as the product of one-dimensional Gaussian correlation functions $e^{-\theta_l (x^{(1)}_l - x^{(2)}_l)^2}$, with $l \in [\![1, n]\!]$. The $\theta_l$ are hyperparameters that relate to the correlation between the observations. The kriging predictor is given by $\hat{y}(x) = f(x)^T \beta^* + r(x)^T \gamma^*$, where $f(x) = [f_1(x), \ldots, f_k(x)]$, $\beta^*$ is the generalised least squares estimate of $\beta$, and $r(x)$ is the correlation vector between $Z(x)$ and the $Z(s_j)$ evaluated at the input training sites $(s_j)_{1 \le j \le m}$. The different parameters needed for the full definition of the predictor are given by:

$\beta^* = (F^T R^{-1} F)^{-1} F^T R^{-1} Y$,   (1)
$\gamma^* = R^{-1} (Y - F \beta^*)$,   (2)
$\theta^* = \arg\min_{\theta} \{ \det(R)^{1/m} \, \sigma^2 \}$,   (3)

with $F$ the expanded $m \times k$ design matrix, $F_{i,j} = f_j(s_i)$, and $Y$ the acoustic response at all receiver locations for the input training sites. The structure of the kriging surrogate makes it particularly well suited to smooth functions, especially when the Gaussian correlation function is used. On the other hand, one of the limitations of kriging lies in the density of the input data, which, in the box-constrained case of this work, translates into an upper limit on the number of input samples that can be used for model training. Indeed, as the input data becomes denser, the correlation matrix $R$ can become ill-conditioned and the Cholesky factorization can therefore fail. In this work, the kriging model of the SMT Surrogate Modeling Toolbox [15] is used.
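The paper does not specify the software used for the neural networks. As an illustration, the sketch below shows how the DNN and kriging surrogates described above could be set up, assuming TensorFlow/Keras for the neural network and the SMT toolbox [15] for kriging; the placeholder training data, the array names and settings such as the number of epochs are assumptions, not values from the original study.

    import numpy as np
    import tensorflow as tf
    from smt.surrogate_models import KRG

    # Placeholder data standing in for the level-2 case:
    # x_train: (n_samples, n_inputs) sampling plan, y_train: (n_samples, n_sensors) SPLs (dB)
    # at the optimal receivers selected in step (II).
    rng = np.random.default_rng(0)
    x_train = rng.uniform(size=(1000, 4))
    y_train = rng.uniform(40.0, 80.0, size=(1000, 10))
    x_val = rng.uniform(size=(200, 4))
    n_inputs, n_sensors = x_train.shape[1], y_train.shape[1]

    # "Unlogging": convert the SPLs from dB (re 20 uPa) to pascals before training the DNN.
    p_ref = 2e-5
    y_train_pa = p_ref * 10.0 ** (y_train / 20.0)

    # Deep neural network: normalisation layer, four ReLU hidden layers, linear output layer.
    normalizer = tf.keras.layers.Normalization()
    normalizer.adapt(x_train)
    dnn = tf.keras.Sequential(
        [normalizer]
        + [tf.keras.layers.Dense(64, activation="relu") for _ in range(4)]
        + [tf.keras.layers.Dense(n_sensors, activation="linear")]
    )
    dnn.compile(optimizer="adam", loss="mse")
    dnn.fit(x_train, y_train_pa, epochs=200, verbose=0)

    # Kriging surrogate (SMT), trained here on the SPLs in dB.
    krg = KRG(theta0=[1e-2] * n_inputs, print_global=False)
    krg.set_training_values(x_train, y_train)
    krg.train()
    y_val_pred = krg.predict_values(x_val)   # kriging predictions at the validation inputs

The 1-NN benchmark would correspond to the same Keras construction with a single 64-neuron hidden layer instead of four.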
4. RESULTS
4.1. Singular values
The simulated data consists of a matrix containing sound levels for all receivers. From the geometry of the problem, neighbouring receiver points are expected to exhibit correlated sound levels and thus redundant information. To test this hypothesis and determine whether the use of compressed sensing is sensible, a singular value decomposition (SVD) is applied to the data matrices. The ordered singular values are plotted in Figs. 4 and 5 for levels 1 and 2, respectively.

Figure 4: Ordered singular values for level 1.
Figure 5: Ordered singular values for level 2.

The first singular values carry most of the energy in both levels; it therefore appears that compressed sensing may be used to enable training the surrogate models on a subset of the sensors with satisfactory results. This possibility is explored in the next section.

4.2. Compressed sensing error
Compressed sensing is not a lossless compression method; as a result, some error is expected when reconstructing the sound levels for the entire array of receiver points from a subset of the latter. To evaluate this error, the n_r optimal receiver points are selected for multiple values of n_r for each level, and the sound levels on the entire façade are reconstructed using only the data from these receiver points and compressed sensing. n_r is later referred to as the "rank" of the compressed sensing. For illustration purposes, the locations of the determined optimal receiver points are given in Fig. 6 for level 1, for ranks 5 and 10: for these ranks, these locations are the best from which to reconstruct noise levels on the entire façade with minimal error. In this section, the error is measured in terms of mean absolute error (MAE) in dB. The optimal receiver points are selected using the training dataset, while the reconstruction and error estimation are done using the validation dataset. The error is plotted in Fig. 7 for levels 1 and 2. From these results, it appears that compressed sensing is more efficient with the data generated for level 1, as the reconstruction from 5 receiver points in level 1 yields the same average error as the reconstruction from 10 receiver points in level 2.

Figure 6: Location of the optimal receiver points (red) on the façade for (a) rank 5 and (b) rank 10, for level 1.
Figure 7: Mean absolute error (MAE), 95th percentile error, 99th percentile error and maximum error generated by compressed sensing reconstruction from n_r optimal receiver points for levels (a) 1 and (b) 2.

For level 1, the maximum error in the entire matrix is less than 10 dB for 2 or more sensors, and gets as low as 1 dB from about 30 sensors, while the mean absolute error in the matrix is much lower, at less than 1 dB for 2 or more sensors, and around 0.05 dB for 30 or more sensors. The plots for level 2 are similar to those for level 1, although the error due to the reconstruction is about twice as high, which is expected given the increased complexity of the problem. Thus, it appears that the reconstruction process is able to reconstruct sound levels for all vehicle positions from the validation dataset reasonably well with a small subset of receiver points, and without extreme outliers.

4.3. Surrogate modelling error
In this section, the prediction error of the three surrogate models alone is presented. The error is computed on the validation dataset. The error metrics used are the root mean-squared error (RMSE), the mean absolute percentage error (MAPE) and the mean absolute error (MAE).

  Model                               1-NN    DNN     Kriging
  Training dataset size (thousands)   8       8       1       5       8
  RMSE                                0.31    0.1     0.15    0.1     0.062
  MAPE                                0.29    0.084   0.08    0.045   0.032
  MAE                                 0.23    0.066   0.1     0.06    0.024

Table 2: Error of surrogate modelling alone for level 1, for a single receiver.

Table 2 presents the error when predicting sound levels at a single receiver position, for level 1. Unsurprisingly, the 1-NN model performs the worst. With an 8,000-point training dataset, the kriging model outperforms the DNN model, while with only 5,000 training data points, the kriging model has approximately the same accuracy as the DNN model. In this case, the kriging method thus requires a smaller set of training data points than the neural network-based methods to achieve a comparable performance level.

  Model                               1-NN    DNN     Kriging
  Training dataset size (thousands)   196     196     1       5       8
  RMSE                                0.274   0.041   0.2     0.14    0.017
  MAPE                                0.25    0.025   0.079   0.055   0.0033
  MAE                                 0.2     0.02    0.1     0.07    0.0041

Table 3: Error of surrogate modelling alone for level 2, for a single receiver.

Table 3 presents the same metrics at a single receiver position, for level 2. Once again, the 1-NN model performs the worst, while the kriging model can achieve the same accuracy as the DNN model with a smaller amount of training data. However, the accuracy of the kriging model could not be improved by using a training sample size larger than 8,000, as this would lead to ill-conditioning problems that would either adversely impact the accuracy of the model or prevent it from converging. It is worth noting that this input density limitation concerns the overall input regardless of the number of receivers that are modelled.
Therefore, the accuracy observed when predicting the behaviour of a single receiver is expected to decrease significantly as the number of receiver points to be predicted simultaneously increases. Although the prediction error is low in the single-receiver case for all models, the goal is to predict sound pressure levels at all receiver positions. Using compressed sensing, it is possible to reduce the number of receiver points where the surrogate model needs to predict noise levels – and thus the number of receivers whose data is used for the training – while still being able to reconstruct the full façade noise levels, which reduces the training time.

Table 4 shows the error metrics for the prediction of the 10 optimal receivers ("rank 10") for level 1 for the surrogate modelling approaches, while Table 5 presents similar results for level 2. It can be observed that, for level 1, the errors associated with the rank 10 receivers are higher than the ones obtained for the single receiver, as the models now need to predict the values at multiple receivers simultaneously.

  Model                               1-NN    DNN     Kriging
  Training dataset size (thousands)   8       8       1       5       8
  RMSE                                0.5     0.19    0.2     0.12    0.1
  MAPE                                0.48    0.17    0.16    0.066   0.052
  MAE                                 0.38    0.13    0.12    0.07    0.067

Table 4: Error of surrogate modelling alone for level 1 and 10 optimal receivers.

  Model                               1-NN    DNN     Kriging
  Training dataset size (thousands)   196     196     1       5       8
  RMSE                                0.46    0.1     0.34    0.29    0.25
  MAPE                                0.44    0.085   0.23    0.2     0.16
  MAE                                 0.34    0.066   0.3     0.25    0.21

Table 5: Error of surrogate modelling alone for level 2 and 10 optimal receivers.

On the other hand, the kriging model, which performed better than the DNN for the prediction of the level 1 rank 10 receivers, performs much worse for level 2 and rank 10. Indeed, the larger dataset associated with this level benefits the DNN model, which can capitalise on the full 196,000 data points available per receiver, whereas the kriging model is still limited to 8,000 training data points for the entire set of rank 10 receivers. As a reminder, these errors only represent the prediction performance for the SPLs at the 10 selected receiver points and not the total prediction error, which also includes the error due to compressed sensing reconstruction.

4.4. Overall prediction error: reconstruction of the surrogate-predicted SPLs
This section presents the total prediction error that results from predicting sound levels for a selection of optimal receivers, and then reconstructing noise levels for the entire façade from these predictions. This overall prediction error is composed of the error from compressed sensing (CS) and the error from the surrogate model. For a particular rank and surrogate model, the overall prediction error has a lower limit that is determined by the component with the larger error magnitude. The overall prediction error thus falls under one of the following three regimes, depending on the magnitude of the CS error and surrogate model error: a) the CS error dominates, b) the CS error and surrogate model error are comparable, and c) the surrogate model error dominates. The overall prediction error, depending on the rank and the type of surrogate model, is presented in Tables 6 and 7 for levels 1 and 2 respectively, along with the CS error alone for comparison. In both levels, the overall prediction error using kriging and the DNN appears to be dominated by the CS error for lower ranks, while the error from surrogate modelling becomes more prominent as the rank increases.
For instance, at level 1 and rank 10, the overall RMSE values when using kriging or the DNN (0.19 and 0.21, respectively) are almost identical to the error from CS alone (0.20), which implies that at this rank it is CS that limits the accuracy of the model. However, at level 1 and rank 100, the overall RMSE using kriging or the DNN (0.11 and 0.18, respectively) is noticeably higher than the CS error (0.03); here, the surrogate models are the factors limiting the accuracy of the overall prediction. All in all, the overall error when using kriging or the DNN is comparable for levels 1 and 2, with a slight advantage for kriging at lower ranks for level 1, while the DNN model outperforms kriging at higher ranks for level 2. The error when using the 1-NN model is always much larger than for either of the other two models or for CS alone, which suggests that it is not a good solution for this problem. Nonetheless, it should be noted that the increased amount of training data for level 2 allows the overall error when using the 1-NN model to be comparable to or smaller than for level 1.

         RMSE                                     MAE [dB]
  n_r    CS     KRG+CS   1-NN+CS   DNN+CS         CS     KRG+CS   1-NN+CS   DNN+CS
  5      0.32   0.31     0.61      0.34           0.21   0.21     0.44      0.24
  10     0.20   0.19     0.56      0.21           0.13   0.13     0.39      0.14
  40     0.09   0.12     0.65*     0.17           0.05   0.07     0.39*     0.12
  100    0.03   0.11     0.48      0.18           0.02   0.05     0.32      0.13

Table 6: RMSE and MAE for all receivers from the surrogate modelling-based compressed sensing reconstruction, as well as compressed sensing reconstruction alone, for level 1. Note: for the starred values, the CS reconstruction appears to unexpectedly increase the total error (1-NN + CS reconstruction) even though the 1-NN error alone follows the expected trend (decreasing error with increasing rank).

         RMSE                                     MAE [dB]
  n_r    CS     KRG+CS   1-NN+CS   DNN+CS         CS     KRG+CS   1-NN+CS   DNN+CS
  5      0.52   0.51     0.65      0.53           0.36   0.35     0.50      0.36
  10     0.32   0.31     0.53      0.32           0.21   0.21     0.40      0.22
  40     0.14   0.24     0.41      0.16           0.08   0.16     0.29      0.11

Table 7: RMSE and MAE for all receivers and parameters from the surrogate modelling-based compressed sensing reconstruction, as well as compressed sensing reconstruction alone, for level 2.

5. CONCLUSION
This work confirms the feasibility of a surrogate model using compressed sensing associated with either kriging or neural networks for very fast façade noise calculation with high spatial resolution. The obtained results encourage a scale-up of this method to larger, more complex geometries. In both considered configurations (levels 1 and 2), compressed sensing was successfully applied to reconstruct the entire array of sensors from a subset of just a few percent of the total number of sensors, with low error and few outliers. This method significantly reduced the computational burden of training the kriging and neural network models, making it possible to consider an extension of this method to geometries larger than the simple model considered in this contribution, as well as the use of compressed sensing in other problems relating to environmental noise. As the first application of compressed sensing in this field, this contribution might be of interest to the environmental acoustics community. Surrogate modelling using kriging or neural networks has proven successful in predicting noise levels on a façade with good accuracy, and faster than through the use of ray-tracing-based methods.
Kriging, which maintained high accuracy with small amounts of training data in the considered test case – and even outperformed the other models in lower dimensions – appears, in association with compressed sensing to reduce the number of receivers where noise levels need to be predicted, to be well indicated as a first step towards an accurate surrogate model for fast simulation of façade noise levels. If greater accuracy is desired for problems with high dimensionality, a DNN trained with more data shows the highest potential, at the cost of longer training times. Finally, a limited number of high-fidelity FEM-based simulations could be used on top of the ray tracing datasets in order to improve the overall accuracy of the neural network-based framework through the use of transfer learning.

REFERENCES
[1] Stylianos Kephalopoulos, Marco Paviotti, and Fabienne Anfosso-Lédée. Common noise assessment methods in Europe (CNOSSOS-EU), 2012.
[2] Directive 2002/49/EC of the European Parliament and of the Council of 25 June 2002 relating to the assessment and management of environmental noise. Off. J. Eur. Communities, 189(12):12–25, 2002.
[3] A. Can, E. Chevallier, M. Nadji, and L. Leclercq. Dynamic traffic modeling for noise impact assessment of traffic strategies. Acta Acustica united with Acustica, 96(3):482–493, 2010.
[4] Weigang Wei, Timothy Van Renterghem, Bert De Coensel, and Dick Botteldooren. Dynamic noise mapping: A map-based interpolation between noise measurements with high temporal resolution. Applied Acoustics, 101:127–140, 2016.
[5] Sacha Baclet, Siddharth Venkataraman, Romain Rumpler, Robin Billsjö, Johannes Horvath, and Per Erik Österlund. From strategic noise maps to receiver-centric noise exposure sensitivity mapping. Transportation Research Part D: Transport and Environment, 102:103114, 2022.
[6] Ziqin Lan, Canming He, and Ming Cai. Urban road traffic noise spatiotemporal distribution mapping using multisource data. Transportation Research Part D: Transport and Environment, 82:102323, 2020.
[7] Michael J. Bianco, Peter Gerstoft, James Traer, Emma Ozanich, Marie A. Roch, Sharon Gannot, and Charles-Alban Deledalle. Machine learning in acoustics: Theory and applications. The Journal of the Acoustical Society of America, 146(5):3590–3628, 2019.
[8] Pierre Aumond, Arnaud Can, Vivien Mallet, Bert De Coensel, Carlos Ribeiro, Dick Botteldooren, and Catherine Lavandier. Kriging-based spatial interpolation from measurements for sound level mapping in urban areas. The Journal of the Acoustical Society of America, 143(5):2847–2857, 2018.
[9] Antoine Lesieur, Pierre Aumond, Vivien Mallet, and Arnaud Can. Meta-modeling for urban noise mapping. The Journal of the Acoustical Society of America, 148(6):3671–3681, 2020.
[10] Marc Green and Damian Murphy. Environmental sound monitoring using machine learning on mobile devices. Applied Acoustics, 159:107041, 2020.
[11] Tatiana Alvares-Sanches, Patrick E. Osborne, and Paul R. White. Mobile surveys and machine learning can improve urban noise mapping: Beyond A-weighted measurements of exposure. Science of The Total Environment, 775:145600, 2021.
[12] Wenjie Mu, Bo Yin, Xianqing Huang, Jiali Xu, and Zehua Du. Environmental sound classification using temporal-frequency attention based convolutional neural network. Scientific Reports, 11(1), 2021.
[13] Carl R. Hart, D. Keith Wilson, Chris L. Pettit, and Edward T. Nykaza. Machine-learning of long-range sound propagation through simulated atmospheric turbulence. The Journal of the Acoustical Society of America, 149(6):4384–4395, 2021.
[14] Krithika Manohar, Bingni W. Brunton, J. Nathan Kutz, and Steven L. Brunton. Data-driven sparse sensor placement for reconstruction: Demonstrating the benefits of exploiting known patterns. IEEE Control Systems Magazine, 38(3):63–86, 2018.
[15] Mohamed Amine Bouhlel, John T. Hwang, Nathalie Bartoli, Rémi Lafage, Joseph Morlier, and Joaquim R. R. A. Martins. A Python surrogate modeling framework with derivatives. Advances in Engineering Software, page 102662, 2019.