
Proceedings of the Institute of Acoustics

 

 

The effect of loudness on spatial knowledge acquisition in a virtual outpatient polyclinic

 

Donya Dalirnaghadeh1, Bilkent University, Ankara, Turkey

Semiha Yilmazer2, Purdue University & Bilkent University, West Lafayette, USA / Ankara, Turkey

 

ABSTRACT

 

This study aims to determine whether changing the loudness level of sound sources creates soundmarks that aid spatial knowledge acquisition in a virtual outpatient polyclinic. It also examines the effect of loudness on perceptual attributes of the sound environment. Drawing on the crossmodal correspondence between brightness and loudness, and the positive perceptions associated with brightness, we explore whether the loudness of a sound source alters the perception of the sound environment. A virtual simulation of an outpatient polyclinic was created with varying additions of a sound at different loudness levels. Twenty-four participants were assigned to one of three groups: a control group (no change to the sound environment of the polyclinic), a normalized loudness group (an announcement and alarm sound added at a loudness level normalized to the background), and an increased loudness group (the announcement and alarm sound were 3 dB louder than the background). The results showed that the sound environment of group 3 was rated as more contented, less annoying, and more energetic and stimulating than those of the other groups. Additionally, there was a trend towards group 3 performing better than the other groups in spatial knowledge tasks.

 

1. INTRODUCTION

 

Hearing is more pronounced in healthcare units because repetitive zones lack differentiation and visual cues2,3; thus, the sound environment is vital in healthcare units. Like sight, hearing is a long-distance sense that serves to perceive and understand environmental cues that help people orient themselves in a spatial context4,5. The sound environment is among the factors that define building spaces6, but its role in spatial knowledge tasks has rarely been studied.

 

The traditional wayfinding system relies on environmental cues such as landmarks and signage (spatial cues such as arrows, color coding, and directional texts)7,8. However, this system can be confusing because of the hospitals' complex layouts and the overwhelming number of signs9. Thus, it is essential to look into alternative and cost-efficient methods that help spatial knowledge acquisition.

 

Recent studies suggest that sound has a leading effect on the noticeability of visual elements, in that variations in sound level correspond with changes in visual attention10. Attention is one of the factors that affect spatial learning11; thus, adding a sound that attracts users' attention may lead to better spatial knowledge. However, few studies have examined the effect of sound on spatial knowledge in hospitals. Thus, in this preliminary study, we aimed to explore whether changing specific characteristics of the sound sources in the sound environment would create a more positive soundscape while enhancing spatial knowledge acquisition. This study aims to provide grounds for using the sound environment as a design element to promote spatial knowledge by analyzing the physical and perceptual characteristics of sound.

 

2. METHODS

 

2.1. Virtual environment

 

Recent studies have shown that virtual systems with lower immersion levels, such as desktops, produce results comparable with higher-immersion systems in spatial knowledge tasks11. Hence, we simulated the outpatient polyclinic of the Bilkent Integrated Health Campus in Ankara as the virtual environment. This outpatient polyclinic has a large area and a complex layout, which makes it a suitable choice for the study.

 

We used Chief Architect Premier X11 to create a 3D simulation of the space. The scenes were rendered in real time at 20 frames per second12. A video of the specified route (recorded in the real environment) was created using the Walkthrough path tool for passive exploration. As in previous virtual environments, the route was shown with a plain ceiling and sufficient contrast between the floor and the walls. No light sources were used, to avoid directional cues from shadows13. The route was made of uniform, undistinguishable paths and neutral-colored walls, so the walls did not provide wayfinding cues14. Figure 1 presents the schematic plan and the traveled route.

 

2.2. Participants

 

Twenty-four students from Bilkent University, Turkey, participated in this study. All the participants were familiar with the polyclinic. To prime the participants, we asked them to imagine they were visitors to the outpatient polyclinic of the Bilkent Integrated Health Campus. The participants were randomly divided into three experimental groups that varied in the addition of a sound with different loudness levels, with eight people (five women and three men) in each group.

  • Group 1 (control group): No change in the sound environment of the polyclinic.

  • Group 2 (normalized loudness): The sound environment was augmented with an announcement sound that was normalized with the loudness level of the original sound environment.

  • Group 3 (increased loudness): The sound environment was augmented with an announcement sound that was 3 dB louder than the loudness level of the original sound environment.

 

2.3. Experimental Stimuli

 

To decide which features of the sound environment to augment, data were first gathered from the participants in group 1. A listening test was conducted by asking the participants to listen to the sound environment of the outpatient polyclinic through headphones, with no visuals. After listening to the sound recording, the participants were asked open-ended questions about the most dominating sounds they heard. They were also asked to indicate their expectations and preferences and which sounds identify an outpatient polyclinic. The intention was to determine whether any soundmarks or sound sources were clearly heard in the sound environment, so that the loudness level of those sound sources could be changed. The results showed that the participants in group 1 rated people's voices and footsteps as the most dominating sounds (see Figure 2).

 

 

Figure 1: The schematic plan and the traveled route.

 

Since the loudness level of these sound sources could not be changed, the participants' sound expectations in an outpatient polyclinic were considered instead. More than half of the participants expected to hear announcement sounds. The other expected sound sources were people's voices (patients, doctors, and nurses), silence, and equipment sounds. Thus, we recorded a typical announcement about wearing masks and social distancing, widely heard in public spaces such as hospitals. We used a female voice from the Natural Reader text-to-speech extension to record the announcement in Turkish. Adobe Audition was used to create the audio for groups 2 and 3. For group 2 (normalized loudness), the loudness level of the announcement was matched to that of the existing sound environment of the outpatient polyclinic and then mixed into the original sound environment. For group 3, the loudness level of the announcement was increased by 3 dB and then mixed into the original sound environment. Acoustic loudness was measured with MATLAB: 13.68 sones in group 1, 15.00 sones in group 2, and 24.07 sones in group 3.
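The gain step can be sketched in code. The following Python/NumPy fragment is an illustrative assumption only (the actual editing was done in Adobe Audition and the loudness measurement in MATLAB); it shows how a +3 dB relative gain translates to a linear amplitude factor applied to the announcement before mixing:

```python
import numpy as np

def db_to_gain(db):
    """Convert a decibel change to a linear amplitude factor."""
    return 10.0 ** (db / 20.0)

def mix(background, announcement, gain_db=0.0):
    """Mix an announcement into a background track with a relative gain.

    Both inputs are float arrays of equal length, nominally in [-1, 1].
    """
    g = db_to_gain(gain_db)
    mixed = background + g * announcement
    # Rescale only if the mix exceeds full scale, to avoid clipping.
    peak = np.max(np.abs(mixed))
    return mixed / peak if peak > 1.0 else mixed

# Hypothetical signals: silence plus a constant-amplitude announcement.
bg = np.zeros(100)
ann = 0.5 * np.ones(100)
out = mix(bg, ann, gain_db=3.0)  # amplitude scaled by ~1.41
```

Note that a +3 dB amplitude gain (factor 10^(3/20) ≈ 1.41) does not map one-to-one onto perceived loudness; the sone values reported above capture the perceptual change.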

 

The sound stimuli for groups 2 and 3 were added to the video created with the Walkthrough path tool using CyberLink PowerDirector editing software. A clap was used to synchronize the video and sound information. The models were animated with a wide-angle lens following the route to provide a 65-degree field of view and a more immersive virtual environment. The simulated eye height was set to 1.60 meters from the floor, and walking speed was a constant 1.1 m/s15-17. The video duration was 220 seconds (including stops before the intersections). The route was identical across conditions, with a length of 154 meters and eight direction changes (three left, five right). We used a 17-inch Asus personal computer (2.59 GHz, 16 GB RAM, NVIDIA GeForce GTX 960) to provide the visual information. The laptop was placed on a desk, and the participants sat in a chair approximately 50 cm from the screen. Each participant undertook the test individually and without interruption in an experiment room with closed doors and blocked windows. Sound signals were delivered by the computer through headphones (ROG Strix Fusion 300 7.1).

 

2.4. Procedure

 

Before the experiment, the participants' hearing was tested with the Widex online hearing test. All the participants had normal hearing. Afterward, the participants filled in demographic information about themselves and their familiarity with the Bilkent Integrated Health Campus outpatient polyclinic. The number of visits and the time of the last visit were also recorded, to see whether the degree of familiarity differed across groups and whether it impacted the spatial knowledge tasks. After this section, the participants were asked to listen to the sound recording of their assigned experiment group and answer questions about it. We asked them to listen carefully to the sound recording through headphones (no visual information was provided). As discussed earlier, the participants were asked to identify dominant sound sources based on the sound stimuli they had listened to. Sound expectation, preference, and soundmark identification were also asked about. These questions were answered based on the participants' previous experiences.

 

The second part of the listening task involved answering 5-point Likert scales to evaluate the sound environment (1 - very bad, 5 - very good), its appropriateness for an outpatient polyclinic (1 - not at all, 5 - perfect), and the perceived loudness of the sound environment (1 - very quiet, 5 - very loud). Additionally, we adopted the Mehrabian-Russell model, which uses the pleasure, arousal, and dominance (PAD) scale, to rate the perception of the sound environment18. In addition to the adjective pairs in the M-R model, we added the unpleasant-pleasant, gloomy-fun, and noisy-quiet adjective pairs, which have been used in previous soundscape studies19-21. After examining the questionnaire results, six adjective pairs (sleepy-wide awake, sluggish-wild, dominant-submissive, in control-cared for, autonomous-guided, influential-influenced) were eliminated: many participants did not clearly understand these adjective pairs and had difficulties relating them to the sound environment. These pairs were therefore removed from the data set during analysis to avoid potential bias.

 

After the listening task, the participants were asked to watch the prepared videos based on their experiment group. The participants were not informed about what tasks they would do after the video to control possible biases in the responses and attention. The video started from the outpatient polyclinic entrance, traveled across the patient admission desks and elevators, and finally arrived at its destination, the neurology department. The space plan was not available to the participants during the learning phase.

 

2.5. Performance tasks

 

After watching the video, all groups were asked to do four spatial memory tasks using the Landmark-Route-Survey model representation22. A landmark placement task measured landmark knowledge (task 1). A scene sorting task (task 2) was used to measure route knowledge, and a sketch mapping (task 3) and a pointing task (task 4) were used to measure survey knowledge. After finishing the tasks, the participants filled in the Santa Barbara Sense-of-Direction scale questionnaire. This scale is a self-report measurement of spatial abilities comprising 15 questions23.

 

2.6. Data Analysis

Statistical Package for the Social Sciences (SPSS 25.0, IBM, USA) was used to analyze the data.

 

All tasks showed good internal reliability (Cronbach's α from 0.70 to 0.88). Levene's test indicated homogeneity of variance in all tasks; thus, we used parametric tests to analyze the data. We used a one-way ANOVA to compare the groups in all the tasks except the sketch mapping task, with the Scheffé test as a post-hoc test for pairwise comparisons between the groups. A chi-square test was used in the sketch mapping task because the data were nominal.

 

3. RESULTS

 

3.1. Listening test results across the groups

 

The results of the first section of the listening task were compared across the three experiment groups. In group 1, human voices and footsteps were the most dominating sounds; a few participants also mentioned background noise. In group 2 (normalized loudness) and group 3 (increased loudness), announcement sounds were the most dominating, followed by human voices and footsteps. Sounds of technology, such as equipment and elevator sounds, were also mentioned by the participants in groups 2 and 3. It should also be noted that fewer participants in groups 2 and 3 identified background noise as the most dominating sound than in group 1. An interesting finding is that adding announcement sounds with normalized or increased loudness resulted in more sound sources being identified. For example, the participants in group 1 mostly identified just human voices and footsteps as the dominant sounds, whereas the participants in groups 2 and 3 identified more than three sound sources. The addition of the announcements may have attracted the participants' attention to other sound sources available in the sound environment. Figure 2 presents a bar graph of the dominant sounds across the groups.

 

The mean score of each semantic pair was calculated to analyze whether the perceptual attributes of the sound environment differed among the groups. Figure 3 and Figure 4 present the bar graph and radar graph of the semantic differential scale. As seen in the radar graph, the majority of the adjective pairs overlap. Although the mean scores indicate that all three groups rated the sound environment negatively, there are slight differences in the melancholic-contented, annoyed-pleased, dull-energetic, and uninteresting-stimulated adjective pairs: the sound environment of group 3 was rated as more contented, less annoying, and more energetic and stimulating than those of the other groups.

 

 

Figure 2: Bar graph of the most dominating sound sources across the groups

 

 

Figure 3: Bar graph of the mean scores of perceptual attributes of the sound environment

 

 

Figure 4: Radar graph of mean scores of perceptual attributes of the sound environment

 

3.2. Spatial knowledge performances in each task

 

The Santa Barbara Sense of Direction questionnaire results indicated no differences between the self-reported spatial abilities of the participants; F (2,21) = 1.649, p = 0.216, η2 = 0.136. Thus, any observed differences in the performance tasks can be attributed to the experiment group.

 

Task 1 (landmark placement) analysis: In this task, the participants were asked to place the escalator, the staircases, the elevators, and the patient administration desks on a blank plan as accurately as possible. The answers were scanned and uploaded to the Gardony Map Drawing Analyzer. The square root of the canonical organization score was compared between the groups for scoring purposes. The results indicated a significant difference in the subjects' performance; F (2,21) = 5.141, p = 0.015, η2 = 0.329. The Scheffé post hoc test was applied for pairwise comparisons. There was a significant difference between group 1 and group 3, p = 0.015; however, there was no significant difference between group 1 and group 2, p = 0.266, or between groups 2 and 3, p = 0.332. Participants in group 3 scored higher (mean score = 0.870) than group 2 (mean score = 0.647) and group 1 (mean score = 0.401). See Figure 5 for the data analysis between the three experimental groups in task 1.

 

 

Figure 5: Mean scores in the landmark placement task across the three experimental groups. Each panel displays performance for the control, normalized loudness, and increased loudness conditions. Asterisks indicate significant differences at p < .05.

 

Task 2 (scene sorting) analysis: In this task, the participants were presented with eight pictures taken along the route and asked to sort them chronologically. Comparisons of the percentages of correctly ordered pictures indicated a significant effect of the experiment group on performance; F (2,21) = 9.810, p = 0.001, η2 = 0.483. The Scheffé post hoc test indicated a difference between group 1 and group 3 (p = 0.001); however, there was no significant difference between group 1 and group 2 (p = 0.098) or between groups 2 and 3 (p = 0.124). The bar graph shows that participants in group 3 (mean score = 88.5) performed better than group 2 (mean score = 62.5) and group 1 (mean score = 35.938). Figure 6 presents the data distribution in task 2 across the groups.

 

Task 3 (sketch mapping) analysis: In this task, the participants were presented with a plan showing the architectural elements and asked to draw the route they had watched in the video, similar to previous studies24. A pass-or-fail method was used to analyze the data22. A chi-square test showed no significant difference between the groups, χ2(2) = 1.371, p = 0.504. The percentages of correct answers within each group were compared: 28.6% in group 1, 42.9% in group 2, and 28.6% in group 3 answered correctly. Table 2 presents the numbers and percentages of correct and wrong answers in task 3 across the groups.

 

Table 2: Number and percentages of correct and wrong answers in sketch-mapping (Task 3) across the groups

 

 

 

Figure 6: Mean scores in the scene sorting task (Task 2) across the three experimental groups. Each panel displays performance for the control, normalized loudness, and increased loudness conditions. Asterisks indicate significant differences at p < .05.

 

Task 4 (pointing task) analysis: In this task, the participants were asked to imagine standing at a given landmark, facing another, and pointing to a third, similar to previous studies25. For scoring purposes, the average deviation between the pointed direction and the correct direction across all four questions was compared. The results indicated no significant effect of experiment group on performance; F (2,21) = 0.768, p = 0.477, η2 = 0.068. Although there is no significant difference between the groups, the average deviation from the correct direction is lowest for group 3 at 41.09 degrees, followed by group 2 at 63.44 degrees. Group 1 had the worst performance, with a 71.09-degree deviation.
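The averaging of pointing errors can be illustrated with a short sketch. The degree values below are hypothetical, not the study's data; the wraparound handling ensures that, for example, a response of 350° against a correct direction of 10° counts as a 20° error rather than 340°:

```python
def angular_error(pointed_deg, correct_deg):
    """Absolute angular deviation in degrees, accounting for wraparound."""
    diff = abs(pointed_deg - correct_deg) % 360.0
    return min(diff, 360.0 - diff)

# Hypothetical responses of one participant on four pointing questions.
pointed = [100.0, 355.0, 180.0, 45.0]
correct = [90.0, 10.0, 200.0, 40.0]
errors = [angular_error(p, c) for p, c in zip(pointed, correct)]
mean_deviation = sum(errors) / len(errors)  # average deviation in degrees
```

The per-participant mean deviations computed this way would then be the dependent variable entered into the one-way ANOVA.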

 

Overall, except for the sketch mapping task (task 3), the results indicated a positive impact of adding the announcement sound with normalized and increased loudness on spatial knowledge acquisition. A significant difference was detected between groups 1 and 3 in the landmark placement and scene sorting tasks. Although there was no significant difference in the pointing task, comparing the mean values shows improved task performance in groups 2 and 3. With a larger sample size, the differences in performance might reach significance.

 

4. CONCLUSIONS

 

Brightness is one of the visual characteristics that has been explored in wayfinding studies. Bright corridors have been found to be a more decisive factor of attraction than corridor width26. In another study, scores for the attractiveness and remembrance of color and light showed that warm colors with high brightness levels facilitated memory of space, while cool colors with high brightness helped people orient themselves in space27. In addition to memorability, high brightness was associated with positive emotions and was preferred more. Based on the crossmodal correspondence between the loudness of sounds and visual brightness, we expected that adding a sound with increased loudness would improve the perception of the sound environment.

 

The results showed that adding a sound with increased loudness led to slight positive changes in four adjective pairs. Additionally, some participants stated that hearing the announcement more clearly than the background balanced the sound environment and reduced the annoyance of the hospital sound environment. Before conducting the listening test, the participants were told that they would listen to a sound recording of an outpatient polyclinic. This information may have biased their perception and caused a negative response towards the sound environment. In future studies, information about the context may be withheld to avert this bias.

 

Regarding the spatial knowledge tasks, although significant differences were only found between groups 1 and 3 in the landmark placement and scene sorting tasks, the results seem promising. With a larger sample size, the effect of loudness on spatial knowledge tasks and the perception of space would become clearer, providing a foundation for using sounds as soundmarks that aid wayfinding.

 

5. REFERENCES

 

  1. Heron, J., Whitaker, D. & McGraw, P. V. Sensory uncertainty governs the extent of audio visual interaction. Vision research 44, 2875-2884 (2004).

  2. Arthur, P. & Passini, R. Wayfinding: people, signs, and architecture. (1992).

  3. Farr, A. C., Kleinschmidt, T., Yarlagadda, P. & Mengersen, K. Wayfinding: A simple concept, a complex process. Transport Reviews 32, 715-743 (2012).

  4. La Malva, F., Verso, V. R. L. & Astolfi, A. Livingscape: a multi-sensory approach to improve the quality of urban spaces. Energy procedia 78, 37-42 (2015).

  5. Secchi, S., Lauria, A. & Cellai, G. Acoustic wayfinding: A method to measure the acoustic contrast of different paving materials for blind people. Applied ergonomics 58, 435-445 (2017).

  6. Setola, N. et al. The impact of the physical environment on intrapartum maternity care: identification of eight crucial building spaces. HERD: Health Environments Research & Design Journal 12, 67-98 (2019).

  7. Morag, I. & Pintelon, L. Digital wayfinding systems in hospitals: A qualitative evaluation based on managerial perceptions and considerations before and after implementation. Appl Ergon, 103260-103260 (2021).

  8. Rodrigues, R., Coelho, R. & Tavares, J. M. R. Users’ perceptions of signage systems at three Portuguese hospitals. HERD: Health Environments Research & Design Journal 13, 36-53 (2020).

  9. Passini, R. Wayfinding in architecture. (1984).

  10. Liu, C., Kang, J. & Xie, H. Effect of sound on visual attention in large railway stations: A case study of St. Pancras railway station in London. Building and Environment 185, 107177 (2020).

  11. Parong, J. et al. The mediating role of presence differs across types of spatial learning in immersive technologies. Computers in Human Behavior 107, 106290 (2020).

  12. Min, Y. H. & Ha, M. Contribution of colour-zoning differentiation to multidimensional spatial knowledge acquisition in symmetrical hospital wards. Indoor and Built Environment, 1420326X20909490 (2020).

  13. Sharma, G. et al. Influence of landmarks on wayfinding and brain connectivity in immersive virtual reality environment. Frontiers in Psychology 8, 1220 (2017).

  14. Lingwood, J., Blades, M., Farran, E. K., Courbois, Y. & Matthews, D. The development of wayfinding abilities in children: learning routes with and without landmarks. Journal of environmental psychology 41, 74-80 (2015).

  15. Haq, S., Hill, G. & Pramanik, A. in Proceedings of the 5th International Space Syntax Symposium. 387-405.

  16. Lee, S. & Kline, R. Wayfinding study in virtual environments: The elderly vs. the younger aged groups. ArchNet-IJAR: International Journal of Architectural Research 5, 63 (2011).

  17. North, H. Distance distortion: A comparison of real world and computer animated environments. Journal of Interior Design 28, 26-36 (2002).

  18. Mehrabian, A. & Russell, J. A. An approach to environmental psychology. (the MIT Press, 1974).

  19. Acun, V. & Yilmazer, S. A grounded theory approach to investigate the perceived soundscape of open-plan offices. Applied Acoustics 131, 28-37 (2018).

  20. Hall, D. A., Irwin, A., Edmondson-Jones, M., Phillips, S. & Poxon, J. E. An exploratory evaluation of perceptual, psychoacoustic and acoustical properties of urban soundscapes. Applied Acoustics 74, 248-254 (2013).

  21. Dalirnaghadeh, D. & Yilmazer, S. The effect of sound environment on spatial knowledge acquisition in a virtual outpatient polyclinic. Applied Ergonomics 100, 103672 (2022).

  22. Cogné, M. et al. Are visual cues helpful for virtual spatial navigation and spatial memory in patients with mild cognitive impairment or Alzheimer’s disease? Neuropsychology 32, 385 (2018).

  23. Hegarty, M., Richardson, A. E., Montello, D. R., Lovelace, K. & Subbiah, I. Development of a self-report measure of environmental spatial ability. Intelligence 30, 425-447 (2002).

  24. Wallet, G. et al. Virtual/real transfer of spatial knowledge: Benefit from visual fidelity provided in a virtual environment and impact of active navigation. Cyberpsychology, Behavior, and Social Networking 14, 417-423 (2011).

  25. Muffato, V., Meneghetti, C., Di Ruocco, V. & De Beni, R. When young and older adults learn a map: The influence of individual visuo-spatial factors. Learning and Individual Differences 53, 114-121 (2017).

  26. Vilar, E., Rebelo, F., Noriega, P., Duarte, E. & Mayhorn, C. B. Effects of competing environmental variables and signage on route-choices in simulated everyday and emergency wayfinding situations. Ergonomics 57, 511-524 (2014).

  27. Hidayetoglu, M. L., Yildirim, K. & Akalin, A. The effects of color and light on indoor wayfinding and the evaluation of the perceived environment. Journal of environmental psychology 32, 50-58 (2012).

 


ddalirnaghadeh@bilkent.edu.tr

syilmaze@purdue.edu semiha@bilkent.edu.tr