Audio augmentation of car journeys to improve occupants' well-being

Zuzanna Podwinska 1
Acoustics Research Centre, University of Salford
The Crescent, Salford M5 4WT, England

Lara Harris
Acoustics Research Centre, University of Salford
The Crescent, Salford M5 4WT, England

Andrew Jackson
Bentley Motors
Pyms Lane, Crewe, Cheshire, CW1 3PL, England

Connor Welham
Acoustics Research Centre, University of Salford
The Crescent, Salford M5 4WT, England

Andrew Elliott
Acoustics Research Centre, University of Salford
The Crescent, Salford M5 4WT, England

ABSTRACT
Car interiors are often designed with the aim of being as quiet as possible. This has the benefit of eliminating most unwanted sound, such as engine or tyre noise, but it also blocks out environmental sounds that might be perceived as positive and even desirable. Bringing in some of these positive sounds – particularly sounds of nature or human activity – could enhance the experience of both the driver and the passengers. The literature has shown that exposure to pleasant soundscapes has the potential to aid recovery from stress and is associated with a lower heart rate than exposure to unpleasant soundscapes. Increasing the pleasantness of the sound environment in the car could therefore improve well-being. This paper reports on an immersive audio-visual listening experiment investigating how listeners perceive journeys augmented with realistic soundscapes. To increase realism and ecological validity, the experiment uses spatial audio, 360-degree videos presented through a virtual reality headset, and a car seat with vibrations corresponding to the presented drive.

1 z.m.podwinska@salford.ac.uk

1. INTRODUCTION
Reducing unwanted sound has long been a focus of engineering activity in the automotive industry, typically referred to as NVH (noise, vibration, and harshness). It is now possible to achieve an acoustic environment inside the vehicle cabin that effectively excludes most external sound through purely passive means. Excluding sounds such as tyre, wind, and engine noise is usually desirable. However, other sounds that are prevented from reaching the vehicle occupants may be worth preserving, either to improve the journey experience or for other reasons such as enhanced situational awareness. The study described here investigates the first of these use cases, exploring the use of sound to enhance passenger well-being.

Pleasantness is one of the main dimensions on which soundscapes can be described. There is evidence that soundscape pleasantness is related to physiological responses in the body. Exposure to more unpleasant soundscapes has been found to correlate with a heightened heart rate and a lowered respiration rate [1]. Pleasant soundscapes have also been shown to produce a decrease in skin conductance levels after a stress task compared to unpleasant soundscapes, which suggests better recovery from stress [2]. This indicates that there is a well-being benefit to being exposed to pleasant as opposed to unpleasant soundscapes.

In this paper, we evaluate whether bringing external sounds into the car can increase the pleasantness of car journeys. We also assess the eventfulness of the enhanced sound environments, as the second important dimension of soundscapes [3], and whether participants judge them to be appropriate in the context of being a passenger in a car.

2. METHODS
2.1. Recordings and reproduction
To simulate the experience of being a passenger in a car, 360-degree video and spatial audio recordings were made inside a test vehicle while driving through various locations. At the same time, sound was also recorded outside of the car, which was later used to enhance the journeys.

Video recordings were made using an Insta360 Pro 2 camera placed on the front passenger seat of the car (see Figure 1a). A SoundField microphone recording first-order Ambisonics audio was placed directly below the camera. To record exterior sounds, two cardioid microphones were mounted in the centre of the roof of the car. Two calibrated B&K measurement microphones were mounted in the cabin and on the roof to capture the sound levels for reproduction in the laboratory. Additionally, two accelerometers were mounted on the rails of the driver's seat to record vibrations during the drive.

In the laboratory, participants were seated in a car seat placed in an acoustically treated room designed for multi-channel spatial audio reproduction (see Figure 1b). The interior sound was decoded directly from the B-format recordings to the 16-loudspeaker setup around the listener. The exterior sound was first encoded to left and right positions in first-order Ambisonics, and then decoded to the same loudspeaker setup. The level of the reproduced sound was set to match the sound level captured by the measurement microphones during recording. The average LAeq reproduction level across the chosen scenes was 52.7 dB, with the quietest scene at 43.5 dB and the loudest at 61.4 dB.

The 360-degree videos were stitched from the individual camera lens recordings with the Insta360 Stitcher software and edited in Adobe Premiere Pro. They were then played over an Oculus Quest 2 virtual reality headset. To make the experience even more immersive, vibrations recorded in the car were reproduced with a ButtKicker shaker mounted on an aluminium frame underneath the seat. The frame was custom made to elevate the car seat so that the average listener ear height was approximately in line with the tweeter of the central loudspeaker array.

Figure 1: Recording and reproduction setup: a) 360-degree camera and Ambisonics microphone set up at the passenger seat to capture the audio-visual scenes; b) a car seat set up in an acoustically treated room with a spatial audio loudspeaker setup, where the scenes were reproduced for the experiment.
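The exact decoding chain is implementation-specific, but the two key operations described above for the exterior sound – panning the roof microphone signals to fixed left and right directions in first-order Ambisonics, and matching the playback level to the calibrated in-car measurement – can be sketched as follows. This is a minimal illustration only, with hypothetical placeholder signals, a hypothetical target level, and an assumed classic B-format (FuMa) convention; it is not the exact processing used in the study.

```r
# Encode a mono signal to a fixed direction in first-order Ambisonics (B-format).
# Positive azimuth is taken as anticlockwise from the front (so +90 deg = left).
encode_foa <- function(s, azimuth_deg, elevation_deg = 0) {
  az <- azimuth_deg * pi / 180
  el <- elevation_deg * pi / 180
  cbind(
    W = s / sqrt(2),               # omnidirectional component (-3 dB, FuMa)
    X = s * cos(az) * cos(el),     # front-back
    Y = s * sin(az) * cos(el),     # left-right
    Z = s * sin(el)                # up-down
  )
}

# Linear gain needed to bring the reproduced level to the level measured by
# the calibrated microphone during the drive.
level_match_gain <- function(measured_dB, reproduced_dB) {
  10^((measured_dB - reproduced_dB) / 20)
}

# Example: pan the left roof microphone hard left and the right one hard right,
# then sum the two B-format signals (placeholder sine tones stand in for audio).
fs <- 48000
left_mic  <- sin(2 * pi * 440 * (0:(fs - 1)) / fs)
right_mic <- sin(2 * pi * 550 * (0:(fs - 1)) / fs)
exterior_bformat <- encode_foa(left_mic, 90) + encode_foa(right_mic, -90)

# Scale so that playback matches, e.g., a 52.7 dB LAeq target when the
# uncalibrated playback measures 60.0 dB at the listening position.
exterior_bformat <- exterior_bformat * level_match_gain(52.7, 60.0)
```

The resulting B-format signals would then be decoded to the 16-loudspeaker array in the same way as the interior recording.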
2.2. Experimental procedure
The experiment was a full-factorial within-subject design with two independent variables: sound condition (baseline / enhanced) and scene (6 different audio-visual scenes). Each scene was presented twice: with just the interior sound (baseline sound condition) and with a mixture of interior and exterior sound (enhanced sound condition), and the presentation order was randomised. The scenes used in the experiment were 30 seconds long and were chosen to cover a range of soundscapes, including natural and urban ones. In their enhanced versions, some of the scenes included sounds expected to be pleasant (e.g. birdsong), and some unpleasant (e.g. traffic). Table 1 lists all audio-visual scenes used in the experiment with brief descriptions. Participants were not told before the experiment about the nature of the audio differences they would experience, or that they would be viewing the same video content with different sound.

After experiencing each scene, participants were asked to take the headset off and respond to three questions in a questionnaire displayed on a screen in front of them. The first question asked them to describe the environment on the two-dimensional pleasantness / eventfulness plane [3] by marking a point on a graphical user interface representing 2D axes, with pleasantness along the x-axis and eventfulness along the y-axis (see Figure 2). The main dimension labels were emphasised. Corner labels taken from the soundscape literature were also included to help participants navigate the space, but participants were not specifically instructed to consider these terms or rate the scenes according to them. A similar user interface was previously used for continuous real-time evaluation of soundscapes [4] and was found to give results consistent with those derived from the semantic scales described in the ISO standard on soundscapes [5]. The second question was an overall preference question, "Overall, how much did you like the sound environment in this video?", answered on a slider ranging from "Disliked very much" to "Liked very much". Participants were also asked to explain their rating in a free-text box. Finally, participants were asked "To what extent was the sound environment appropriate to the overall experience?" and responded on a slider ranging from "Not at all" to "Perfectly". Again, they were then asked to provide a justification for their rating in a free-text field. The text fields were obligatory, but participants were instructed that they could write as much or as little as they wanted. Each participant was presented with 12 trials, and the main part of the experiment lasted approximately 30 minutes. Before starting the experiment, all participants completed a short practice session (two scenes) to familiarise themselves with the procedure.

Table 1: Scenes used in the experiment.

Scene  Location                                         Description
1      Manchester, Lower Byrom Street                   Quiet urban street with a mixture of human sounds, construction sounds and birdsong.
2      Manchester, Deansgate                            Busy city street with heavy traffic; car stationary, stuck in traffic.
3      Manchester, St Mary's Gate                       A narrow city centre street, with a lot of pedestrians and a busker playing.
4      Manchester, Chorlton Ees Nature Reserve          Natural environment, birds chirping.
5      Betws-y-Coed, Wales                              A slow drive over a bridge, with a waterfall to the right and people walking past.
6      River rapids in Snowdonia National Park, Wales   A road with visible and audible river rapids on one side, and trees on the other side.

Figure 2: Pleasantness / eventfulness rating graph shown to participants (prompt: "Please rate the sound environment of the scene you have just watched in terms of its pleasantness and eventfulness"). The black dot shows an example response.

3. RESULTS
31 participants took part in the experiment, 24 male and 7 female. 21 participants were between 18 and 24 years old, 7 between 25 and 34 years old, and 3 between 35 and 44 years old. Out of the total of 372 data points (31 participants x 12 conditions), 3 were missing from the results due to technical issues with the virtual reality headset. Because the missing data points were related to tracking errors in the equipment, it is safe to assume that they were not correlated with any experimental variables. We investigated differences in ratings of overall preference, appropriateness, pleasantness and eventfulness between the baseline and enhanced conditions, for each scene.
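For the analyses that follow, the ratings can be thought of as a long-format data set with one row per trial. The sketch below is purely illustrative: the predictor names (Participant, Scene, SoundCondition) follow the regression formula reported later in Table 2, the response columns correspond to the four rated variables, the values are invented, and in the real data set 369 of the 372 possible rows are present.

```r
# Illustrative long-format layout of the response data (one row per trial).
# All four ratings are bounded on [0, 1]; values shown here are made up.
responses <- data.frame(
  Participant     = factor(c(1, 1, 2)),
  Scene           = factor(c(1, 1, 4), levels = 1:6),
  SoundCondition  = factor(c("baseline", "enhanced", "enhanced")),
  Pleasantness    = c(0.62, 0.35, 0.81),
  Eventfulness    = c(0.30, 0.71, 0.28),
  Preference      = c(0.55, 0.40, 0.90),
  Appropriateness = c(0.70, 0.66, 0.85)
)
str(responses)
```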
Figure 3 shows Pearson's correlation coefficients between the four dependent variables. Preference was highly correlated with pleasantness and appropriateness. Eventfulness was generally not correlated with the other variables.

Figure 3: Correlations between the four measured response variables, a) for the baseline scenes (n = 183), b) for the enhanced scenes (n = 186). Pearson correlations; coefficients that are non-significant at p < 0.05 (Holm adjustment) are marked with a cross.

Data collected with the sliders and on the graphical pleasantness / eventfulness interface were not normally distributed. This is to be expected of visual-analog-scale-type data, because the end points are clearly limited at 0 and 1, so a linear regression might not be a good fit to the data. Recently, new approaches to modelling such data have been proposed, such as the zero-one-inflated beta (ZOIB) regression [6, 7] and the ordered beta regression [8]. Both approaches model data obtained from visual analog scales with a combination of a beta distribution (which is defined on the (0, 1) interval) and probabilities of obtaining values at exactly 0 and 1. Here, we analyse the data with ZOIB regression, using the brms package in R [9]. All models were fitted as mixed-effects models, with sound condition (baseline / enhanced), scene, and their interaction as fixed effects, and participant as a random effect, allowing for varying intercepts between participants. Note that although ZOIB regression allows four different distribution parameters to be modelled, for simplicity the following analysis only investigates the predictor variables' effects on the mean. Table 2 shows the four fitted models.
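As an illustration of how such a model can be fitted (not necessarily the exact code used to produce Table 2), a minimal sketch is given below. It assumes a long-format data frame named responses with the variables named in the regression formula, and uses the emmeans package as one possible way to obtain enhanced-versus-baseline contrasts similar to those reported later in Table 3.

```r
library(brms)     # Bayesian multilevel models via Stan [9]
library(emmeans)  # marginal means and post-hoc contrasts

# Zero-one-inflated beta (ZOIB) mixed-effects model for one response variable,
# following the formula reported in Table 2. Only the mean of the beta
# component is modelled as a function of the predictors; phi, zoi and coi
# are left as constant model parameters.
fit_pleasantness <- brm(
  Pleasantness ~ SoundCondition + Scene + SoundCondition:Scene + (1 | Participant),
  data   = responses,
  family = zero_one_inflated_beta()
)

summary(fit_pleasantness)

# Enhanced vs. baseline contrast within each scene, on the logit (link) scale,
# analogous to the estimates in Table 3.
emm <- emmeans(fit_pleasantness, ~ SoundCondition | Scene)
contrast(emm, method = "revpairwise")
```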
Parameter                    Pleasantness              Eventfulness              Preference                Appropriateness
Intercept                    0.367 [-0.073, 0.758]     -0.903 [-1.340, -0.522]   0.354 [-0.008, 0.681]     0.406 [0.028, 0.798]
Sound-Enhanced               -0.135 [-0.676, 0.398]    1.052 [0.577, 1.537]      -0.059 [-0.486, 0.373]    0.107 [-0.349, 0.583]
Scene2                       -0.054 [-0.592, 0.524]    0.025 [-0.445, 0.519]     -0.021 [-0.473, 0.404]    0.208 [-0.268, 0.695]
Scene3                       0.289 [-0.222, 0.847]     0.722 [0.240, 1.201]      0.143 [-0.298, 0.593]     0.253 [-0.217, 0.722]
Scene4                       -0.152 [-0.691, 0.407]    0.347 [-0.125, 0.845]     -0.364 [-0.807, 0.087]    0.075 [-0.457, 0.547]
Scene5                       -0.208 [-0.765, 0.316]    0.164 [-0.314, 0.660]     -0.394 [-0.799, 0.058]    -0.268 [-0.717, 0.209]
Scene6                       0.056 [-0.479, 0.605]     0.319 [-0.154, 0.808]     -0.058 [-0.490, 0.393]    0.144 [-0.309, 0.646]
Sound-Enhanced x Scene2      -0.576 [-1.304, 0.208]    0.107 [-0.536, 0.814]     -0.418 [-1.030, 0.181]    0.088 [-0.591, 0.772]
Sound-Enhanced x Scene3      -0.537 [-1.263, 0.226]    0.025 [-0.688, 0.657]     -0.257 [-0.893, 0.330]    -0.240 [-0.900, 0.392]
Sound-Enhanced x Scene4      0.749 [-0.011, 1.515]     -0.756 [-1.426, -0.063]   0.814 [0.207, 1.448]      0.182 [-0.499, 0.872]
Sound-Enhanced x Scene5      0.423 [-0.332, 1.236]     -0.533 [-1.200, 0.120]    0.687 [0.080, 1.318]      0.594 [-0.059, 1.278]
Sound-Enhanced x Scene6      -0.056 [-0.801, 0.678]    -0.576 [-1.226, 0.092]    0.169 [-0.477, 0.776]     -0.156 [-0.781, 0.543]
Participant (random effect)  0.409 [0.249, 0.601]      0.556 [0.382, 0.758]      0.369 [0.227, 0.520]      0.508 [0.348, 0.705]
phi                          2.318 [2.023, 2.639]      3.487 [3.009, 3.986]      4.668 [4.029, 5.379]      4.073 [3.483, 4.731]
zoi                          0.003 [0.000, 0.010]      0.003 [0.000, 0.009]      0.054 [0.034, 0.079]      0.097 [0.069, 0.129]
coi                          0.498 [0.027, 0.978]      0.494 [0.021, 0.975]      0.715 [0.506, 0.880]      0.810 [0.670, 0.916]
Num.Obs.                     369                       369                       369                       369
WAIC                         -52.4                     -134.0                    57.8                      128.2
RMSE                         0.26                      0.21                      0.21                      0.23

Table 2: Summary of the ZOIB regression models for the four response variables. The regression equation was: RespVariable ∼ SoundCondition + Scene + SoundCondition:Scene + (1 | Participant). The table shows regression parameter estimates and their corresponding 95% confidence intervals. Note that all estimates and confidence intervals are logit-transformed (not on the response scale). Phi, zoi and coi are additional model parameters which were not varied by sound condition or scene.

Post-hoc comparison tests were conducted for all four response variables, to investigate the effect of the enhanced sound condition, compared to the baseline condition, for each of the 6 scenes. A summary of these tests is shown in Table 3. Contrast comparisons for pleasantness show that the enhanced condition was rated as about half as pleasant as the baseline condition for scene 2 (Est: -0.71, 95% CI: [-1.28, -0.15], OR: 0.49) and scene 3 (Est: -0.67, 95% CI: [-1.22, -0.15], OR: 0.51). In addition, the enhanced condition was rated as about 1.9 times more pleasant than the baseline condition for scene 4 (Est: 0.62, 95% CI: [0.06, 1.19], OR: 1.85). Figure 4 A shows the estimated means for the different conditions.
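The odds ratios quoted alongside the contrast estimates follow directly from the logit link used for the mean of the beta distribution: exponentiating a contrast estimated on the logit scale gives the ratio of the odds, p / (1 − p), of the expected rating. As a worked example for the pleasantness contrasts in scenes 2 and 4:

\[
\mathrm{OR} = \exp\!\big(\hat{\beta}_{\text{enhanced}-\text{baseline}}\big), \qquad
\exp(-0.71) \approx 0.49, \qquad
\exp(0.62) \approx 1.85 .
\]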
Post-hoc tests for eventfulness show that the enhanced condition was rated as significantly more eventful than the baseline for all but one scene. The difference was especially prominent for scene 1 (Est: 1.05, 95% CI [0.58, 1.54], OR: 2.86), scene 2 (Est: 1.16, 95% CI [0.68, 1.63], OR: 3.19) and scene 3 (Est: 1.08, 95% CI [0.58, 1.54], OR: 2.94), and smaller for scene 5 (Est: 0.518, 95% CI [0.055, 1.006], OR: 1.68) and scene 6 (Est: 0.480, 95% CI [0.038, 0.955], OR: 1.62). Figure 4 B shows the estimated means.

Contrast              Scene                     Pleasantness            Eventfulness           Preference              Appropriateness
enhanced - baseline   1 (Lower Byrom St)        -0.14 [-0.68, 0.40]     1.05 [0.58, 1.54]*     -0.06 [-0.49, 0.37]     0.11 [-0.35, 0.58]
enhanced - baseline   2 (Deansgate)             -0.71 [-1.28, -0.15]*   1.16 [0.68, 1.63]*     -0.47 [-0.90, -0.05]*   0.19 [-0.32, 0.66]
enhanced - baseline   3 (St Mary's Gate)        -0.67 [-1.22, -0.15]*   1.08 [0.58, 1.54]*     -0.32 [-0.75, 0.10]     -0.13 [-0.61, 0.33]
enhanced - baseline   4 (Ees Nature Reserve)    0.62 [0.06, 1.19]*      0.30 [-0.21, 0.74]     0.75 [0.32, 1.19]*      0.29 [-0.23, 0.79]
enhanced - baseline   5 (Betws-y-Coed)          0.29 [-0.26, 0.82]      0.52 [0.06, 1.01]*     0.63 [0.19, 1.06]*      0.71 [0.19, 1.15]*
enhanced - baseline   6 (river rapids)          -0.19 [-0.68, 0.36]     0.48 [0.04, 0.96]*     0.12 [-0.40, 0.54]      -0.05 [-0.50, 0.47]

Table 3: Post-hoc contrast analysis for all four response variables – the effect of sound condition for the different scenes. The table shows effect estimates and their corresponding 95% confidence intervals, given on the log (not the response) scale. Statistically significant contrasts (95% CI excluding zero) are marked with an asterisk.

Figure 4: Interaction between the different scenes and sound conditions for pleasantness (A) and eventfulness (B). Plots show the estimated marginal means and their 95% confidence intervals. Statistically significant results are marked with asterisks.

Post-hoc tests for preference show that the only scene for which the enhanced sound condition was significantly less liked than the baseline was scene 2 (Est: -0.47, 95% CI [-0.90, -0.05], OR: 0.6). On the other hand, the enhanced condition was approximately twice as liked as the baseline condition for scenes 4 (Est: 0.75, 95% CI [0.32, 1.19]) and 5 (Est: 0.63, 95% CI [0.19, 1.06]). The estimated means for preference are plotted in Figure 5 A.

Finally, only one scene showed a significant difference in appropriateness between the enhanced and baseline conditions. For scene 5, the enhanced condition was rated as about twice as appropriate as the baseline condition (Est: 0.71, 95% CI [0.19, 1.15]). The estimated means are shown in Figure 5 B.

Figure 5: Interaction between the different scenes and sound conditions for preference (A) and appropriateness (B). Plots show the estimated marginal means and their 95% confidence intervals. Statistically significant results are marked with asterisks.

4. DISCUSSION
The effects of sound enhancement on the response variables, in particular pleasantness and preference, varied between the different scenes. This is perhaps to be expected, as the scenes represented a range of urban, town and natural soundscapes.

The first three scenes represented different urban scenes and soundscapes. They were all rated as significantly more eventful in their enhanced sound condition, but varied in how pleasant and liked they were. Scene 1, recorded at Lower Byrom Street, was a relatively quiet urban street with a mixture of human, natural and industrial sounds. The enhanced sound condition was rated as more eventful than the baseline, but not more or less pleasant or preferred.
Although participants described it as a realistic-sounding urban environment, the mixture of natural, human and industrial sounds did not add up to a more pleasant experience. For example, one participant noted: "I liked the birds chirping sound, but a there was also a banging sound in the surrounding which I didn't like".

Scene 2 was a recording of the car stationary in relatively heavy traffic, on a busy road in Manchester (Deansgate). In its enhanced sound condition, it was rated as less preferred and less pleasant than the baseline condition. This is perhaps not surprising, as the sounds captured outside of the vehicle were primarily those of traffic, and among the most commonly used words to describe the enhanced condition in the free-text fields were "heavy", "traffic", "honking" and "loud". We would not expect these types of sounds to be pleasant [10, 11], and this is perhaps a situation where bringing sounds directly from outside into the cabin would not be beneficial.

Scene 3 was a slow drive through a busy city centre, with sounds of people and music dominating the soundscape. In this environment, the enhanced version was also less liked than the baseline, but no significant effect on pleasantness was found. This situation is perhaps less obvious, as sounds of traffic did not dominate here, and one could in principle imagine it being desirable and engaging to hear the sounds of the city while driving through it. Among the words most often used to describe the enhanced version were "street", "busy" and "loud". Responses to the enhanced version of this scene were mixed, with some participants appreciating the lively city environment ("seemed quite bustling and busy, yet maintained a overall jovial and light atmosphere due to music, which gave almost carnival vibes"), while others commented on the "annoying city / car noises". It has also been shown that human sounds tend to elicit a range of different emotional responses [12], so human sounds dominating the soundscape could have contributed to the mixed ratings.

The following three scenes represented natural or small-town scenes and soundscapes. Scene 4 was the most natural one, recorded at the Ees Nature Reserve in Chorlton, Manchester. It was a very slow and quiet drive, with primarily the sounds of birdsong and of the car driving over gravel added in the enhanced version. This was the only scene for which the enhanced condition was not rated as more eventful than the baseline condition. It was, however, significantly more pleasant and preferred. This is consistent with soundscapes containing natural sounds being generally perceived as pleasant [1, 2]. It confirms that adding natural sounds such as birdsong to a car journey has the potential to increase the pleasantness of the sound environment. Participants also commented positively on the birdsong in the free-text fields. For example, one person said they "could hear birds which made the sound experience more enjoyable".

Scene 5 was a slow drive over a bridge with audible flowing water underneath. Among the most common words used to describe the enhanced version were "waterfall" and "nature". Other than being more eventful, the enhanced version was also more preferred than the baseline version. Again, it is consistent with the soundscape literature that sounds of water are perceived as pleasant (e.g.
[13]), so adding water sounds to a car journey could increase the pleasantness of the sound environment. Interestingly, it was also the only scene rated as significantly more appropriate in the enhanced version compared to the baseline. This could be due to the flowing water being visible (and therefore expected), but only audible in the enhanced condition. Participants commented that they "felt engaged and part of the surrounding environment" and that the enhanced version "captured the external environment a lot better as you could hear the people quite distinctively".

Finally, scene 6 was a drive along a road with trees and flowing water, although the water was not as prominent as in scene 5. No words related to water or nature were among the 10 most used words to describe the enhanced sound condition, suggesting that they did not play as big a role in how the soundscape was perceived as in scene 5. In fact, while the baseline version was described with words such as "realistic", "river" and "birds", the enhanced version was described with words like "passing", "loud" and "engine", perhaps being dominated by other cars passing the vehicle. It was rated as more eventful, but not more or less pleasant or liked.

5. CONCLUSIONS
Overall, it was concluded from this experiment that adding external soundscapes containing natural sounds, such as birdsong and flowing water, increased the perceived pleasantness of the sound environment during a car journey. Those sound environments were also generally preferred to the baseline interior sound (the current un-enhanced sound inside the vehicle). On the other hand, adding urban soundscapes dominated by traffic sounds made the sound environment less preferred and less pleasant than the baseline interior sound in the car. Additionally, it was found that adding external sound to a car journey consistently increased the perceived eventfulness of the sound environment, except in the case of the quietest soundscape with little exterior sonic activity. These results support the hypothesis that it is possible to increase both the perceived pleasantness and eventfulness of the sound environment during a road vehicle journey by augmenting it with the right types of sounds.

REFERENCES
[1] Ken Hume and Mujthaba Ahtamad. Physiological responses to and subjective estimates of soundscape elements. Applied Acoustics, 74(2):275–281, 2013.
[2] Oleg Medvedev, Daniel Shepherd, and Michael J. Hautus. The restorative potential of soundscapes: A physiological investigation. Applied Acoustics, 96:20–26, 2015.
[3] Östen Axelsson, Mats E. Nilsson, and Birgitta Berglund. A principal components model of soundscape perception. The Journal of the Acoustical Society of America, 128(5):2836–2846, 2010.
[4] Simone Graetzer, Aleksandra Landowska, Lara Harris, Trevor J. Cox, and William J. Davies. Continuous evaluative and pupil dilation response to soundscapes. In Proceedings of Forum Acusticum 2020. European Acoustics Association (EAA), 2020.
[5] PD ISO/TS 12913-2:2018. Acoustics – Soundscape. Data collection and reporting requirements. Standard, British Standards Institution, 2018.
[6] Raydonal Ospina and Silvia L. P. Ferrari. Inflated beta distributions. Statistical Papers, 51(1):111, 2008.
[7] Matti Vuorre. Sometimes I R: How to analyze visual analog (slider) scale data?, 2019.
[8] Robert Kubinec. Ordered beta regression: A parsimonious, well-fitting model for continuous data with lower and upper bounds. Political Analysis, forthcoming.
[9] Paul-Christian Bürkner.
brms: An R package for Bayesian multilevel models using Stan. Journal of Statistical Software, 80(1):1–28, 2017.
[10] Kristian Jambrošić, Marko Horvat, and Hrvoje Domitrović. Assessment of urban soundscapes with the focus on an architectural installation with musical features. The Journal of the Acoustical Society of America, 134(1):869–879, 2013.
[11] Joo Young Hong and Jin Yong Jeon. Influence of urban contexts on soundscape perceptions: A structural equation modeling approach. Landscape and Urban Planning, 141:78–87, 2015.
[12] Margret Engel, Maria Carvalho, Janina Fels, and William Davies. Verification of emotional taxonomies on soundscape perception responses. September 2021.
[13] Giovanni Brambilla, Luigi Maffei, Maria Di Gabriele, and Veronica Gallo. Merging physical parameters and laboratory subjective ratings for the soundscape assessment of urban squares. The Journal of the Acoustical Society of America, 134(1):782–790, 2013.