Context and Representation: Data Gathering Methodology for Soundscape Contextual Factors

Albert Dwan ¹
Arup Deutschland GmbH
Joachimsthaler Straße 41, 10623 Berlin, Germany
¹ albert.dwan@arup.com

ABSTRACT
The contextual dimension of subjective perception plays a crucial role in moderating listener evaluations of soundscapes. While soundscape indicators are directly measured and have numerically determinable measurement uncertainty, the surveying of subjective perception via soundscape descriptors presents methodological challenges pertaining to a wide range of situational moderators which influence descriptor selection. Variations in subjective assessments by listeners have been shown to correlate with factors such as the perceived relevance of a sound source, whether the acoustic environment meets pre-existing expectations, and familiarity. This paper addresses these methodological challenges by detailing a new data gathering strategy and analysis process for assessing the "interrelationships between person and activity and place, in space and time" (ISO 12913-1:2014). The methodology explored in this paper lends clarity to the dynamics of subjective evaluation within the surveyed population by instrumentalizing a multi-level model of individual perception of physical environments, drawn from related studies in workplace sociology, anthropology, and organizational studies. This model of perception enables numerical analysis of written responses from listeners about their environment, revealing contextual factors which can moderate subjective perception of the sounds within the environment. This additional data can be applied to the development of conceptual frameworks during soundscape analysis.

1. INTRODUCTION
Soundscapes are a relatively novel area of investigation within the broader field of acoustical sciences and design. The ISO 12913 series of standards has formally established Soundscape as a practicable area of acoustic engineering and research, by establishing a fundamental conceptual framework [1] as well as data collection and analysis strategies [2, 3]. These standards define Soundscape as "a holistic approach to the acoustic environment, beyond noise, and its effect on the quality of life". Furthermore, these standards posit that the study of Soundscapes should "use a variety of data collection related to human perception, acoustic environment and context…[which] relies primarily upon human perception and only then turns to physical measurement" [2].

As discussed in ISO 12913-2:2018 §5.1, there are ongoing methodological challenges around the effective integration of technical-acoustic data and subjective-experiential data in soundscape investigations. This section of the standard states that "there is still a significant gap between soundscape descriptors and indicators", whereby a soundscape descriptor is understood as a term used to describe the perception of an acoustic environment (i.e. "measurement by persons"), while a soundscape indicator is a physically measurable or derivable property of the acoustic environment ("measurement by instruments"). Common acoustic indicators include LAeq,T, LCeq,T, and the percentage exceedance levels LAF5,T and LAF95,T. Further psychoacoustic indicators such as sharpness, tonality, roughness, and fluctuation strength are often considered in soundscape investigations.
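For readers less familiar with how such level-based indicators are derived, the following Python sketch (not part of any standard; the function name and sample values are illustrative assumptions) computes an energy-equivalent level and two percentage exceedance levels from a hypothetical series of short-interval A-weighted level samples.

import numpy as np

def compute_level_indicators(levels_db, percentiles=(5, 95)):
    """Compute simple level-based indicators from a series of sound
    pressure level samples in dB, e.g. one-second LAF readings.

    Returns the energy-equivalent level over the series and the requested
    percentage exceedance levels (e.g. L5, L95)."""
    levels = np.asarray(levels_db, dtype=float)

    # Energy-equivalent continuous level: average on a linear (energy)
    # basis, then convert back to decibels.
    leq = 10.0 * np.log10(np.mean(10.0 ** (levels / 10.0)))

    # The percentage exceedance level L_N is the level exceeded N % of the
    # time, i.e. the (100 - N)th percentile of the level distribution.
    exceedance = {f"L{n}": np.percentile(levels, 100 - n) for n in percentiles}
    return leq, exceedance

# Illustrative values only: a short run of one-second LAF samples.
samples = [52.1, 54.3, 60.8, 58.2, 55.0, 53.7, 62.4, 51.9]
leq, ln = compute_level_indicators(samples)
print(f"LAeq = {leq:.1f} dB(A), LAF5 = {ln['L5']:.1f} dB(A), LAF95 = {ln['L95']:.1f} dB(A)")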
In all cases, the determination of soundscape indicators occurs based on numeric data measured with standards-compliant acoustic measurement systems utilizing calibrated microphones. Measurement uncertainty is mathematically determinable under consideration of factors such as the quantity of measurement locations and the duration of each measurement. The degree of accuracy and margins of error can generally be reported along with the data about the indicators themselves.

Determining and reporting the accuracy of soundscape descriptors, as well as of conceptual frameworks developed through grounded theory analysis, is somewhat more difficult. In contrast to soundscape indicators, the determination of uncertainty or margin of error for data pertaining to subjective experience is not precisely regulated by standards or norms. ISO 12913-2 identifies three data collection strategies for subjective data: the Soundwalk, the Questionnaire, and the Guided Interview. ISO 12913-3 Annex B and Annex C discuss methods for analysis of data collected through Questionnaires and Guided Interviews respectively, and Annex E describes "triangulation" methods through which a variety of data sets can be compared with one another to improve the accuracy of the investigation results. By applying parallel methodologies, investigators, or theories to the investigation of a soundscape, the overall level of uncertainty can be reduced [3].

A range of studies explores various dimensions of subjective experience and perception of acoustic environments. While some focus on identifying applicable theories and discussing potential methods through which moderating factors can be understood [4], others focus on defining metrics which can be drawn from the subjective data sets and assessing their correlation with measured indicators [5]. Further studies on methodology are discussed in more detail below.

This paper contributes to ongoing methodological developments around documentation and assessment of moderators and contextual influences for subjective perception of acoustic environments. In this paper, an ongoing research initiative is described, which aims to develop a streamlined and time-efficient surveying method based on psychological and sociological theories around the subjective perception of shared environments and common situations. A primary objective of this research was to reduce the sources of error in the data gathering process itself, by asking survey participants to arrange their responses around a multi-level data structure. This allows survey participants to provide subjective reactions to specific objects, activities, and places which pertain to their own experiences, such that personal opinions can be assessed separately from group dynamics and interpersonal assessments.

The research focused on a specific category of indoor acoustic environments, namely the open-plan office, due to the propensity for the acoustic environment in these workspaces to be perceived negatively (often involving complaints of distraction due to human and mechanical sounds). This category of indoor environments was also chosen due to the broad set of existing literature specifically relating to comfort, stress, and subjective experience among open-plan office workers.
By comparing how subjective experiences in open-plan offices are characterized in technical-acoustic literature against the way these same experiences are treated in sociological, ethnographic, and organizational studies literature, several theoretical frameworks were identified which could be used to improve the handling of subjective data in soundscape investigations. This comparative literature review allowed for the development of a novel survey interface, which has been the subject of a limited number of small-scale tests.

2. PERSONAL AND SITUATIONAL MODERATORS OF SOUND PERCEPTION
The subjective perception of acoustic environments and sources is not uniform across all listeners. Many factors can lead to variations in the way acoustic environments are perceived and evaluated by different listeners. In the case of open-plan offices, most of this variation is due to personal, organizational, and situational moderators, rather than physical properties of the acoustic environment. The acoustic design standard VDI 2569:2019 "Sound Protection and Acoustical Design in Offices" states that "only approximately 30% to 40% of the annoyance effect resulting from noise can be explained by technical-acoustic factors. The predominant portion originates from so-called moderators of annoyance… [which include] attitude towards the noisemaker, predictability of the sound event, activity profile of the employee, organizational and business structure, including identification with the business, workload, other environmental factors such as illumination and thermal comfort as well as individual noise sensitivity" [6].

This statement in VDI 2569 is supported by several studies on psychological and personal factors for acoustic discomfort in open offices. A recent literature review [7] identified several factors with statistically significant correlation to noise sensitivity and disruption in the workplace, such as "sense of control", "ability to screen out noise", "relevancy of the sounds", as well as the complex interplay between job role, personality type, and work activities. Another study, which applied soundscape investigation methods to two offices in Ankara, Turkey, concluded that negative interpretations of soundscapes can be associated with "factors such as tension, workload, alongside with mood and proximity to sound signal locations…employees become unsatisfied with the soundscape of their work environment due to decreased motivation, fatigue, annoyance, etc." [8].

While some of these moderating factors can be documented by the soundscape investigator, such as demographic data, organizational structure, and work activity profiles, the individual psychological and interpersonal moderators are more difficult to ascertain with accuracy. The theoretical assumption behind questionnaire-based methodologies is that a soundscape investigator will be able to discern differences in the way various listeners experience their acoustic environment through the adjectives that they choose to associate with their experiences. However, there is latent potential for error and uncertainty in this approach, since variations in how the listeners personally interpret the descriptors themselves are often not accounted for. Even very common terms such as "loud" and "annoying" do not find uniform agreement with measured sound pressure levels in open offices.
An interesting example is found in a study about open-plan office noise levels and annoyance in Egypt from 2010 [9], in which only 10.5% of all survey respondents felt highly annoyed with an ambient noise level of LAeq,8hours = 72.4 dB(A). A background noise level of LAeq,8hours = 82.6 dB(A) was reported as highly annoying by only 50% of the respondents. Workplace regulations in countries such as Germany define time-averaged sound levels above 80 dB(A) as unsuitable for all work environments, and 55 dB(A) is the recommended maximum for focused intellectual work activities [10]. The cultural and sociological dimensions of this variance in perception of "annoyingly loud" environments would be useful to investigate in greater detail.

Soundscape questionnaires commonly present respondents with a pre-defined list of adjectives or "descriptors" and prompt the respondents to rate the degree to which each adjective correlates with their experience of the soundscape. An investigation by Axelsson, Nilsson, and Berglund [11] presented study participants with fifty recordings of different urban acoustic environments and asked the participants to rate the degree to which 116 unique descriptors correlated with their experience of each recording. Through an eigenvalue decomposition analysis of the intercorrelations between recordings and descriptor ratings, the investigators were able to identify the principal components in the respondent data set. Through this Principal Component Analysis (PCA), it was found that variations in respondent ratings for many descriptors were best characterized and predicted by the variation in ratings for a smaller, specific set of descriptors. These "principal component" descriptors included Pleasantness, Eventfulness, and Familiarity.

The methodology used in the Axelsson, Nilsson, and Berglund study also has some potential for data uncertainty, latent within the data gathering process itself. The survey participants must undertake a cognitive process of interpreting the precise meaning of each descriptor in the context of the listening test, and then cognitively map the interpreted meaning of the descriptor onto their experience of the soundscape. These two interpretive processes can be moderated by personal, situational, sociological, and cultural factors. As mentioned previously, even simple descriptors such as "annoying" can be moderated by a wide range of psychological, organizational, and cultural factors. The meaning of descriptors such as Pleasantness and Familiarity may also be influenced by psycho-social moderators, producing inconsistencies in how each listener understands the terms.

Another investigation, conducted by Dubois, Guastavino, and Raimbault [12], explored the mediating role of higher-level cognitive and semiotic processes in the subjective evaluation of soundscapes. Their study was a multidisciplinary effort "between acousticians, engineers, psychologists, and linguists to investigate how people give meaning to urban soundscapes on the basis of their everyday experiences (psychology), and how individual assessments are conveyed through language as collective expressions (linguistics)" [12]. Their study identified two main cognitive categories for soundscapes:
1. "Event Sequences", or sequences of specific and identifiable acoustic events, and
2. "Amorphous Sequences", in which no specific event or sound source could be isolated.
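As a rough illustration of the kind of principal component analysis described above for the Axelsson, Nilsson, and Berglund study (this is a sketch of the general technique, not their actual procedure), the following Python example extracts principal components from a hypothetical recordings-by-descriptors rating matrix via singular value decomposition; all values are invented.

import numpy as np

def principal_components(ratings, n_components=3):
    """Illustrative PCA of a soundscape descriptor-rating matrix.

    ratings: array of shape (n_recordings, n_descriptors), e.g. mean scale
             ratings for each descriptor and recording.
    Returns the explained-variance ratios and the descriptor loadings of
    the first n_components principal components."""
    X = np.asarray(ratings, dtype=float)
    X = X - X.mean(axis=0)                 # centre each descriptor column
    # SVD of the centred data is equivalent to an eigendecomposition of the
    # covariance matrix of the descriptor ratings.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    explained = (S ** 2) / np.sum(S ** 2)  # share of variance per component
    loadings = Vt[:n_components]           # descriptor loadings per component
    return explained[:n_components], loadings

# Hypothetical example: 6 recordings rated on 4 descriptors.
ratings = np.array([
    [4, 1, 2, 5],
    [5, 2, 1, 4],
    [1, 5, 4, 1],
    [2, 4, 5, 2],
    [3, 3, 3, 3],
    [4, 2, 2, 4],
])
var_ratio, loadings = principal_components(ratings, n_components=2)
print("Explained variance ratios:", np.round(var_ratio, 2))
print("Descriptor loadings:\n", np.round(loadings, 2))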
In the study by Dubois, Guastavino, and Raimbault, multiple layers of cognitive evaluation were revealed, hinging on whether sources and event sequences in the soundscape were identifiable. In cases where source or event identification was possible, the subjective evaluation proceeded on a semantic level, based on the listener's evaluation and opinions of the situation itself. In these cases, physical parameters such as loudness only act as modulating dimensions within the cognitive evaluation process. This level of semantic interpretation is also evidenced in a traffic noise study in Tunisia [13], where a strong correlation between traffic noise annoyance and self-reported "fear of cars" was documented, indicating that the cognitive and emotional meaning of the sound source plays a central role in subjective acoustic perception. Dubois, Guastavino, and Raimbault found that when source identification fails, the listener's perception relies on more abstract evaluation, whereby judgements about the pleasantness (and similar qualities) of the soundscape assume a more dominant role [12].

Emotional reaction may play a role in both the abstract and semantic evaluation processes. A recent study compared an Axelsson-type questionnaire against an alternative questionnaire containing descriptors of the emotional salience of the sounds (i.e., how the soundscapes made the listener feel). The alternative questionnaire, which asked participants to rate their own emotional reaction to the soundscape instead of affective descriptions of the soundscape itself, more reliably predicted the positive and negative dimensions of sound perception [14]. In this study, identifiable sound sources (such as instruments) as well as amorphous sequences (background noise) were presented, suggesting that emotion plays a role in both semantic and abstract soundscape evaluations.

The complex interplay between sound source identity, context, semantic meaning, emotional reaction, and abstract characterization can be analyzed in greater detail through guided interviews and a grounded theory analysis. In a recent study of soundscapes in two museums, it was found that sounds were often evaluated based on their appropriateness to the situation, with a particularly interesting result: the sounds of children were perceived as more appropriate when the specific exhibition pertained to children [15].

The methodological challenge involved in accounting for all psycho-social and situational moderators of sound perception lies primarily with the conceptual frameworks mobilized in the data gathering process. For example, a higher accuracy in the positive-negative dimensions of sound perception was achieved in the "emotional salience" study by shifting from an external conceptual orientation (descriptors as properties of the soundscape) to an internal conceptual orientation (descriptors as properties of the listener experience) [14]. For soundscape studies in open-plan workplaces, a multi-level conceptual framework would be needed to adequately account for the diverse moderators of perception. These moderators could be arranged into layered, inter-connected categories such as the following:
• Spatial and temporal factors (time of day, location in the office, etc.)
• Organizational factors (team structure, workflow, activity profiles)
• Semantic factors (perceived relevance / appropriateness of sounds in their context)
• Interpersonal factors (attitudes, opinions about colleagues, communication practices, etc.)
• Emotional factors (stress, task overload, exhaustion, excitement, etc.)
• Acoustic factors & indicators (loudness, sharpness, impulsiveness, etc.)

3. INTERPERSONAL PERCEPTION AND EXPERIENCE IN OPEN OFFICES
Personal and situational moderators of sound perception are largely excluded from technical acoustic design guidelines for open-plan offices [16]. However, they are investigated in other fields such as Human Resources Theory, Organizational Studies, and Sociology of the Workplace. One particularly relevant investigative construct used in these fields is the "multi-level analysis", or the analysis of employee behaviors in the context of different levels of their interaction with the company. In one formulation, the levels are defined as [17]:
• Team Level: interactions and perceptions between an individual and their team.
• Leadership Level: interactions and perceptions between an individual and their group leadership.
• Organizational Level: attitudes, beliefs, and policies that connect an individual to their firm.
• Institutional Level: norms, policies, standards, and beliefs that connect an individual to their industry or culture.

By assessing employee perceptions on these separate levels, it is possible to identify unique combinations of motivations and beliefs which might influence the employee's decisions. For example, an employee may feel that a specific behavior is normal on the Team Level ("My team always does it this way…") despite having a contradictory opinion on the Institutional Level ("…although it's not the standard method"). By utilizing multi-level investigative constructs, researchers have been able to investigate more complex subjective conditions such as the "Meaningfulness" of work. As formulated by W.A. Kahn in his article "Psychological Conditions of Personal Engagement and Disengagement at Work" [18], Meaningfulness as an investigative construct is defined as the attribution that energy invested in work will be appropriately compensated in terms of the physical or emotional effort, rendering "a feeling that one is receiving a return on investments of one's self in a currency of physical, cognitive, or emotional energy" [19]. This "compensation" for effort does not necessarily happen on all levels simultaneously: it could occur on the Team Level but not on the Leadership Level, creating specific conditions in which teamwork is viewed as positive and meaningful, but the fact of being employed by the company itself is not viewed the same way.
Investigative fields such as Social Geography and Actor-Network Theory posit a further aspect which can be useful for investigating soundscapes: the treatment of the physical space itself as playing an active role in the performance and experience of social relations [20]. In these theories, physical objects and their arrangements carry meanings, based on how they are used in the performance of social relationships. For example, when an individual encounters a locked door, there is a clear "message" communicated between the person who locked it (despite their absence from the situation) and the person encountering it. The message, "you do not have permission to access this space", is acted out through the locked door itself. Under Actor-Network Theory, the distinction between material objects and people is blurred, and it is assumed that "entities do not have inherent qualities, but acquire their form and attribution in, by, and through the performance of their relationships with other entities. Divisions and distinctions are recognized but understood as effects or outcomes rather than essential difference" [20]. Using this theoretical framework, the "meaning" of a sound could be construed as being defined by the role that it plays in a specific instance of social relations within a population. Even though the acoustic properties of the sound itself may not change, the meaning, and thereby people's opinions about it, are contingent upon the social context in which the sound is experienced, each time it is heard.

In a recent ethnographic study of occupants of an open office, researcher Alison Hirst studied an office in which a "hot desking" policy was implemented but where, in practice, desk use was strongly modulated by performances of ownership. Workers who wanted to maintain a sense of stability or continuity in their workplace would "settle" certain desks and populate them with personal objects, to clearly communicate ownership to other colleagues. Colleagues who came late to the office would be aware of this "ownership" regardless of whether the "owner" was present or not, because the social relationship of "possession of the workplace" was performed by the personal objects [21].

Further findings in this study showed that the material environment was not only manipulated on a "team level" to express relationships and territory, but also on an "organizational level". Design decisions made by the facility owners were often explicit in their desired behavioral outcomes. For example, specific surfaces were designed to be at an incline, so that papers and books could not accumulate on them. The desired outcome was a clean, mess-free office, and this was proposed to be achieved through a combination of the material properties of the environment and the resulting behaviors. In this example, the arrangement of the material environment was intended to express a relationship of control between the organization and the behaviors of individual employees, such that the employees would be driven to maintain clean surfaces [21].

4. PROPOSED METHOD OF SURVEY-BASED INVESTIGATION
The definition of soundscape perception as a measurable phenomenon has an influence on the accuracy and latent uncertainty of the collected data. Particularly important is how the data points themselves are conceived (i.e., as properties of the sounds vs. as properties of the experience). Furthermore, close consideration of the cognitive processes required of study participants during the data gathering process is necessary for identifying factors which may be influencing the gathered data. When using descriptor-based questionnaires, the semantic interpretation of each descriptor, as well as the mapping of those interpreted meanings onto one's experience, are cognitively intensive processes which introduce psycho-social moderators upstream of the moment of data entry.

Guided interviews partially resolve this challenge by allowing participants to describe their experiences in their own words, reducing the amount of upstream cognitive work. The data gathered in an interview is a closer approximation of the raw experience of the study participants. However, grounded theory analysis requires extensive interpretation on the part of the analyst, who strives to define a conceptual framework which explains the textual data held in the interview transcripts. The accuracy of this interpretive process can be influenced by psycho-social moderators latent in the investigator themselves.
The data gathering method proposed here is a survey-based method which aims to reduce the influence of psycho-social factors during the data entry and analysis processes. This survey method asks participants to describe their experiences in their entirety, and to subsequently provide "ratings" for the objects which they named as part of their experience. In this way, the participants create their own list of objects, sounds, and activities, for which they then provide subjective ratings.

The theoretical assumption behind this survey method is that soundscapes should be investigated secondarily, rather than being the primary subject of investigation during the data gathering process. Asking participants to consider sound events and acoustic environments directly requires that they treat those acoustic phenomena as individual cognitive objects, introducing interpretative and semantic evaluation prior to the moment of data entry. If survey participants are simply asked to describe their whole experience, with sounds constituting only one part of the whole range of sensation and emotion, then a richer data set is yielded which more closely approximates their raw experience. By gathering data about the situations that listeners find themselves in, ample data can be collected about all the factors which might influence the evaluation of sounds. In this way, sound events can be studied in their perceptual context.

The proposed method also assumes a "multi-level" data structure. This means that participants are prompted to provide multiple opinions about individual objects, spaces, and sounds, in a pre-structured way. In the case of open-plan offices, the "personal", "team", and "organization" levels defined in the referenced Human Resource studies are applied. Using a prescribed structure for data input does not encumber the data gathering process, since respondents can be asked directly about each "level" of perception ("I think this, but my team thinks that"). Using a structured data gathering approach would, however, accelerate the analysis process, since the gathered data would already be arranged by object, perceptual level, situation, and demographic.

For example, if a respondent describes a situation in which they hear a colleague's mobile phone ringing and feel distracted by the sound, there are already multiple cognitive objects present which can have subjective experiential properties. These include the colleague, their phone, the ringing sound, and the feeling of distraction. Separating out these objects provides the opportunity to ask the respondent how they feel about each part of the experience, on multiple levels. In this case, the survey participant would be asked:
• How do you feel about the phone?
• How does your team feel about it?
• How does the organization feel about it?

By providing numeric ratings on a 5-point scale (from "very negative" to "very positive"), the survey participant can enter data about their feelings and reactions on multiple levels of interrelationship with the phone object. Perhaps they feel negatively about colleagues using mobile phones in the office but recognize that phones are viewed positively by their team and organization. The disagreement between ratings on the "personal" and other levels could be a source of stress, leading to complex semantic evaluations of the phone and the sounds it emits.
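To make the proposed multi-level structure concrete, the following Python sketch shows one possible way such a survey record could be represented; the class and field names are illustrative assumptions rather than a prescribed schema, and the values correspond loosely to the ringing-phone scenario above.

from dataclasses import dataclass, field
from typing import List

# Rating scale assumed here: -2 ("very negative") to +2 ("very positive").

@dataclass
class MultiLevelRating:
    personal: int        # how the respondent feels about the object
    team: int            # how they believe their team feels
    organization: int    # how they believe the organization feels

@dataclass
class RatedObject:
    name: str            # e.g. "phone", "ringing sound", "colleague"
    ratings: MultiLevelRating

@dataclass
class ExperienceRecord:
    respondent_id: str
    activity: str        # e.g. "individual focused work"
    location: str        # e.g. "open-plan desk"
    time_of_day: str
    objects: List[RatedObject] = field(default_factory=list)

# Hypothetical record for the ringing-phone example described above.
record = ExperienceRecord(
    respondent_id="P01",
    activity="individual focused work",
    location="open-plan desk",
    time_of_day="morning",
    objects=[
        RatedObject("colleague", MultiLevelRating(personal=1, team=1, organization=1)),
        RatedObject("phone", MultiLevelRating(personal=-1, team=1, organization=1)),
        RatedObject("ringing sound", MultiLevelRating(personal=-2, team=0, organization=0)),
        RatedObject("feeling of distraction", MultiLevelRating(personal=-2, team=-1, organization=0)),
    ],
)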
In this method, the unit of measured data is defined as the association between a cognitive object, its numerical subjective ratings (on 5-point scales, evaluated on three levels), and the cognitive object's location in space, time, and organizational structure. All data is assumed to describe properties of the internal perceptual realm of each survey participant, instead of describing their external environment. The association with the external environment is secondary; the data is measured within the participant's description of their experience.

At the end of the data gathering process, a databank is available for analysis, which consists of a list of cognitive objects which may appear more than once. The ringing phone, for example, may have been described by different participants in different contexts. The analytic object in this case is the sum total of all subjective ratings for that cognitive object, in all of its iterations. An average subjective rating across a quantity n of iterations of the object "phone" can be determined with a simple numeric averaging function:

\bar{R}_{phone} = \frac{1}{n} \sum_{i=1}^{n} R_{phone,i}    (1)

The focus of analysis lies on determining when, where, and how cognitive objects vary in their subjective ratings. This is done through numeric analysis of the 5-point scale data. A phone which is rated negatively in one case, but positively in another, can help the analyst identify contextual and situational factors controlling the positive vs. negative aspects of its perception.

This data gathering process reduces the amount of textual information the survey participant needs to enter. Rather than providing a narrative response or verbal explanation (as in a guided interview), they can simply focus on providing short descriptions of the objects, people, sounds, and activities which are included in their experience of the shared workspace. Separating out the cognitive objects and asking for multi-level ratings allows the participants to document their (potentially complex) evaluations of each object in a systematic way. Their textual responses can be short (free from adjectives and explanations), while the complexity of the data remains high (via multi-level evaluations based on individual objects), and the meaning of each object within the network of social relationships in the office can be more effectively captured.

Furthermore, the numeric 5-point rating scales provide concrete data and partially reduce the need for interpretation on the part of the analyst. They also provide the opportunity for analyzing group dynamics. For example, in the case of the ringing phone, the analyst has the option to calculate the average "personal" rating for all phones in the entire office and compare that with how any individual respondent rated the "office" experience of each specific phone iteration. This can be calculated using a simple function such as the following, where R^{office}_{phone,j} denotes a respondent's "office"-level rating for phone iteration j:

\Delta_{j} = R^{office}_{phone,j} - \frac{1}{n} \sum_{i=1}^{n} R^{personal}_{phone,i}    (2)

Some respondents may have accurately assessed how the office feels on average, and some respondents may have incorrectly estimated the average feeling. These types of analysis can be indicative of the degree of collective understanding, shared opinions, and effective communication within the surveyed population.
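The two calculations above can be illustrated with a short Python sketch; the observation list, object names, and rating values below are hypothetical, and the functions simply implement the object-level average (Equation 1) and the comparison of an individual's "office"-level estimate against the group average (Equation 2).

from statistics import mean

# Each entry: one iteration of a named cognitive object with its ratings
# on the -2 ("very negative") to +2 ("very positive") scale. Values are invented.
observations = [
    {"object": "phone", "personal": -1, "team": 1, "organization": 1},
    {"object": "phone", "personal": -2, "team": 0, "organization": 1},
    {"object": "phone", "personal": 1,  "team": 1, "organization": 1},
    {"object": "doorbell", "personal": -1, "team": -1, "organization": 0},
]

def average_rating(obs, name, level="personal"):
    """Mean rating of all iterations of one cognitive object on one level (Equation 1)."""
    values = [o[level] for o in obs if o["object"] == name]
    return mean(values) if values else None

def estimation_error(obs, name, estimated_office_rating):
    """Difference between one respondent's 'office'-level estimate for an object
    and the office-wide average of the 'personal' ratings for it (Equation 2)."""
    return estimated_office_rating - average_rating(obs, name, level="personal")

print(average_rating(observations, "phone"))        # average personal rating of "phone"
print(estimation_error(observations, "phone", 1))   # gap between a +1 "office" estimate and the group average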
5. TEST OF SURVEY CONCEPT
A small-scale test of this survey method was conducted among colleagues at the firm Arup (where the method itself was developed through internal research initiatives) in November 2021. A total of six (6) volunteers participated in this proof-of-concept exercise. All volunteers work regularly in an office in England with a total office population of 60 employees (a 10% sample of the office population). The participants were asked to give short descriptions of five to six discrete experiences which they had had in the office during the previous week, and then to provide 5-point scale ratings for each object in their narrative responses, on the three levels "personal", "team", and "office". The scales ranged from -2 to +2, corresponding to the ratings "very negative" to "very positive".

The collected database featured a list of objects (each iteration of any object which was narratively identified by any participant), the three 5-point ratings associated with each object, and the situation, activity type, physical location, and time of day associated with the narrative response in which the object was described. This allowed the associations between place and activity in the office to be quickly analyzed, by calculating the percentage of responses which described specific activity types in specific areas of the office. The following table shows the results of this initial analysis into the distribution of activities across office areas:

Table 1: Distribution of activities by activity category and environment used.
Activity Category | Activity Name | Environments Used
Formal Meetings | 69% Appraisals & HR Meetings; 31% Technical Coordination Meetings | 100% Meeting Room
Informal Meetings | 69% Non-Project Work (Opportunities, etc.); 31% Collaborative Project Work | 72% Open-Plan Desk; 28% Meeting Room
Individual Work | 32% Authoring Reports, Memos, or Proposals; 30% Internal Training; 24% Written Correspondence (email, post, etc.); 14% Technical Calculations and Modelling | 100% Open-Plan Desk
Social Interactions | 100% Non-Work Conversations | 100% Kitchen

Each activity category (formal meetings, informal meetings, focused work, etc.) was then evaluated by calculating the average "personal" rating of all objects which were mentioned in those situations, using the following function, where m is the number of object iterations named within the activity category:

\bar{R}^{personal}_{activity} = \frac{1}{m} \sum_{j=1}^{m} R^{personal}_{j}    (3)

The average personal rating of all objects associated with these activity categories is shown in the following diagram (with all average ratings falling between 0 "neutral" and 1 "positive").

[Bar chart: "Average Personal Rating of Objects, Filtered by Activity"; average rating (0.0 to 0.8) plotted for the activity categories Formal Meetings, Informal Meetings, Individual Focused Work, Phone Calls / VTC, and Social Interactions.]
Figure 1: Average personal-level ratings for all objects, filtered by activity type.

In Figure 1, it can be seen that the objects involved in informal meetings are more positively rated than the objects involved in formal meetings. To understand the difference between formal and informal meetings, a more detailed assessment was made, looking at the distribution of object ratings in these two activities. These distributions are shown in Figures 2 and 3 below.
[Bar chart: "Distribution of Object Ratings in Formal Meetings"; percent of rated objects (0% to 100%) per rating level, from Very Negative to Very Positive.]
Figure 2: Distribution of object ratings within the category "Formal Meetings".

[Bar chart: "Distribution of Object Ratings in Informal Meetings"; percent of rated objects (0% to 100%) per rating level, from Very Negative to Very Positive.]
Figure 3: Distribution of object ratings within the category "Informal Meetings".

The following table identifies some of the objects which were named in the activities "Informal Meetings" and "Formal Meetings", arranged by their rating:

Table 2: Distribution of personal ratings for objects in informal vs. formal meetings.
Rating | Objects in "Informal Meetings" | Objects in "Formal Meetings"
Negative | No One, Available, Answer | Microsoft, Frustrated, Connect, Video Conference
Neutral | Doorbell, Ring, Conscious, Disturb, Advice | Meeting, Meeting Room, Talk, Senior Colleague
Positive | Discuss, Visitor, Enthusiasm, Updates, Hear, Colleagues, Contribute, Feedback, Present, Comment, Review | Team, Leadership, Meeting Room, Future, Strategies, Not Disturbed
Very Positive | Support, Colleagues, In-Person, Meet, Brainstorm, Engaging | --

The data gathered from this small exercise provided a basis for evaluating the subjective interrelationships between person, space, objects, and activities in the workspace. A preliminary analysis of the data shown here could lead a soundscape investigator to develop a conceptual framework for how context, activity, and space correlate with positive and negative experiences. For example, Table 1 shows that 100% of the formal meetings described by the survey participants took place inside meeting rooms. Table 2 shows that most of the negative experiences during formal meetings pertain to the difficulty of using the video conferencing technology. The positive dimensions of these experiences pertain to engagement in organizationally significant discussions (as evidenced by words like Leadership, Future, and Strategies), and are particularly supported by the acoustic isolation afforded by the meeting room walls (as shown in the positively rated term "Not Disturbed"). As shown in Figure 1, formal meetings carry the lowest average object rating across all activity types. Combined with the data in Table 2, one could propose that workers feel it necessary to undergo these stressful interactions inside an acoustically isolated space, which is beneficial, but struggle to use the conferencing technology, thereby degrading the overall experience.

The factors driving the subjective experience of informal meetings are different from those in formal meetings. As shown in Table 1, the majority of informal meetings described by participants occurred in the open-plan work area. This results in exposure to a range of sound sources such as the doorbell and colleagues' conversations. Table 2 shows that the unattended doorbell is a particularly distracting element of the acoustic environment. However, per Figure 3, only a few objects were rated negatively in informal meetings, with a larger set of objects being rated positively. Most of these objects have to do with teamwork, and the highest ratings are associated with terms like "Brainstorm" and "Support". The sounds of human activity are consistently rated positively here.
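As an illustration of how the aggregations behind Table 1 and Figure 1 can be reproduced (the record layout and values here are invented placeholders, not the actual study data), the following Python sketch computes the share of responses per environment within an activity category and the average personal rating of the objects named in that category.

from collections import Counter
from statistics import mean

# Illustrative flattened records: (activity category, environment, object, personal rating).
records = [
    ("Formal Meetings",   "Meeting Room",   "video conference", -1),
    ("Formal Meetings",   "Meeting Room",   "leadership",        1),
    ("Informal Meetings", "Open-Plan Desk", "brainstorm",        2),
    ("Informal Meetings", "Open-Plan Desk", "doorbell",          0),
    ("Informal Meetings", "Meeting Room",   "colleagues",        1),
]

def environment_share(recs, activity):
    """Percentage of responses in one activity category per environment (Table 1 style)."""
    envs = [env for act, env, _, _ in recs if act == activity]
    counts = Counter(envs)
    total = len(envs)
    return {env: 100.0 * n / total for env, n in counts.items()}

def average_personal_rating(recs, activity):
    """Mean personal rating of all objects named within one activity category (Figure 1 style)."""
    return mean(r for act, _, _, r in recs if act == activity)

print(environment_share(records, "Informal Meetings"))      # e.g. shares for Open-Plan Desk and Meeting Room
print(average_personal_rating(records, "Formal Meetings"))  # e.g. 0.0 for the invented records above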
The information gathered in this small example study provides numerical data, associated with specific objects in specific situations, which allows for efficient and uncomplicated analysis of the conditions leading to positive vs. negative experiences in a shared acoustic environment. By analyzing the entire experiential context, and only searching for sound sources or acoustic events during the analysis phase, it is easier to identify the contextual factors driving positive vs. negative evaluation of those sounds. This type of data can be useful in developing more reliable conceptual frameworks that can reinforce "triangulated" methods integrating acoustic measurements, binaural recordings, and guided interviews in soundscape studies.

6. ACKNOWLEDGEMENTS
The anonymous participants of the survey at Arup were very generous with their time, and the author is grateful for their participation. Special thanks are also owed to Alison Hirst and Nigel Oseland for their collaborative conversations during the earlier phases of this research.

7. REFERENCES
1. ISO/FDIS 12913-1:2014(E) "Acoustics – Soundscape – Part 1: Definition and conceptual framework"
2. ISO/TS 12913-2:2018(E) "Acoustics – Soundscape – Part 2: Data collection and reporting requirements"
3. ISO/TS 12913-3:2019 "Acoustics – Soundscape – Part 3: Data Analysis"
4. Schulte-Fortkamp, B. Soundscapes and Living Spaces: Sociological and Psychological Aspects Concerning Acoustical Environments. 2002.
5. Liu, J., Kang, J. Soundscape Design in City Parks: Exploring the Relationships Between Soundscape Composition Parameters and Physical and Psychoacoustic Parameters. Journal of Environmental Engineering and Landscape Management, 23(2), 2015, 102-112.
6. VDI 2569:2019 "Sound Protection and Acoustical Design in Offices"
7. Oseland, N., Hodsman, P. The Response to Noise Distraction by Different Personality Types: An Extended Psychoacoustics Study. Corporate Real Estate Journal, 2020, Volume 9, No. 3, pp. 1-19.
8. Acun, V., Yilmazer, S. A Grounded Theory Approach to Investigate the Perceived Soundscape of Open-Plan Offices. Applied Acoustics (Elsevier Ltd), 2018, 131, 28-37.
9. Abas Ali, S. Open-Plan Office Noise Levels, Annoyance and Countermeasures in Egypt. Noise Control Engineering Journal, 59(2), March-April 2011.
10. Technische Regeln für Arbeitsstätten, ASR V3, „Gefährdungsbeurteilung", Juli 2017.
11. Axelsson, Ö., Nilsson, M., Berglund, B. A Principal Components Model of Soundscape Perception. The Journal of the Acoustical Society of America, 128(5), 2010, 2836-2846.
12. Dubois, D., Guastavino, C., Raimbault, M. A Cognitive Approach to Urban Soundscapes: Using Verbal Data to Access Everyday Life Auditory Categories. Acta Acustica, 2006, Volume 92, 865-874.
13. Bouzid, I., Derbel, A., Boubaker, E. Factors Responsible for Road Traffic Noise Annoyance in the City of Sfax, Tunisia. Applied Acoustics, 2020, Volume 168. Elsevier Ltd.
14. Masullo, M., Maffei, L., Iachini, T., Rapuano, M., Cioffi, F., Ruggiero, G., Ruotolo, F. A Questionnaire Investigating the Emotional Salience of Sounds. Applied Acoustics, 182 (2021), 108281.
15. Orhan, C., Yilmazer, S. Harmony of Context and the Built Environment: Soundscapes in Museum Environments via Grounded Theory. Applied Acoustics, 173 (2021), 107709.
16. British Council for Offices, 2019, "Guide to Specification – Best Practices for Offices"
17. Thomas, A.N., Parker, S., Zacher, H., Ashkanasy, N.M. Employee Green Behavior: A Theoretical Framework, Multilevel Review, and Future Research Agenda. Organization & Environment, Sage Publications, 2015, Volume 28, 103-125.
18. Kahn, W.A. Psychological Conditions of Personal Engagement and Disengagement at Work. Academy of Management Journal, 1990, 33, 692-724.
19. Soane, E., Shantz, A., Alfes, K., Truss, C., Rees, C., Gatenby, M. The Association of Meaningfulness, Well-Being, and Engagement with Absenteeism: A Moderated Mediation Model. Human Resource Management, May-June 2013, Volume 52, No. 3.
20. Hirst, A., Humphreys, M. Putting Power in its Place: The Centrality of Edgelands. Organization Studies, 2013, 34, pp. 1505-1527.
21. Hirst, A. Settlers, Vagrants and Mutual Indifference: Unintended Consequences of Hot-Desking. Journal of Organizational Change Management, 2011, Vol. 24, Issue 6, pp. 767-788.