Speech enhancement for helicopter headsets with an integrated ANC-system for FPGA-platforms

Johannes Timmermann (1), Florian Ernst (2), Delf Sachau (3)
Helmut-Schmidt-University / University of the Federal Armed Forces Hamburg, Holstenhofweg 85, 22043 Hamburg, Germany
(1) johannes.timmermann@hsu-hh.de  (2) ernstf@hsu-hh.de  (3) delf.sachau@hsu-hh.de

ABSTRACT
During flights, helicopter pilots are exposed to high noise levels caused by rotor, engine and wind. To protect the health of passengers and crew, noise-dampening headsets are used. Modern active noise control (ANC) headsets can further reduce the noise exposure for humans in helicopters. Internal or external voice transmission in the helicopter must be adapted to the noisy environment, and speech signals are therefore heavily amplified. To improve the quality of communication in helicopters, speech and background noise in the transmitted audio signals should be separated. Subsequently, the noise components of the signal are eliminated. One established method for this type of speech enhancement is spectral subtraction. In this study, audio files recorded with an artificial head during a helicopter flight are used to evaluate a speech enhancement system with additional ANC capabilities on a rapid prototyping platform. Since both spectral subtraction and the ANC algorithm are computationally intensive, an FPGA is used. The results show a significant enhancement in the quality of the speech signals, which leads to improved communication.

1. INTRODUCTION
Clear and understandable communication is important for the safe operation of flights. In helicopters, high levels of noise are generated by the rotor and engine of the aircraft and by aerodynamic effects. The properties and the magnitude of the noise change depending on flight parameters and the helicopter model [1–4]. Modern helicopters are already constructed to reduce noise exposure for humans inside and outside of the aircraft. Adequate hearing protection nevertheless remains necessary to prevent hearing damage [4]. For this reason, noise-dampening headsets are an industry standard. The headphone parts of the headsets encase the ear to passively reduce the external noise. Modern flight headsets or helmets can be improved by ANC techniques to further enhance noise reduction capabilities [2]. While there are commercially available ANC headsets, their performance can still be improved, as shown in [5]. More challenges occur when human communication is factored in. The intercom system is not only used to communicate with air traffic control, but also for internal communication in the helicopter. The speech signals from within the helicopter are recorded with the microphones of the headsets, which are located directly in front of the mouths of the passengers. For these microphones, the RTCA DO-214A flight norm specifies a bandpass characteristic of 500 Hz to 6000 Hz [6]. Even if the bandpass characteristic of the microphones is considered, these speech signals are corrupted by parts of the background cabin noise. The noisy speech signals are then transmitted to all active flight headsets in the helicopter or via radio transmission. To counteract permanent noise exposure, a noise gate is normally used to only transmit signals when a passenger is speaking.
In addition, headsets can be equipped with a second microphone that primarily records cabin noise, in order to subtract parts of the noise signal in the time domain. While these techniques improve the understandability of speech and lower the noise levels, they also reduce the quality of speech and can still be improved [7]. In this study, a high-performance combined speech enhancement and ANC system for FPGAs is presented and tested with audio data recorded in an H120 B helicopter. The speech enhancement algorithm is based on spectral subtraction, and the ANC-system uses the Filtered-x Least Mean Square (FxLMS) algorithm in feedforward configuration. While FPGA implementations of spectral subtraction or the FxLMS algorithm are present in the literature, combining both algorithms for a flight headset is a novel approach. The FxLMS algorithm has been a well-known tool for ANC applications over the last decades. Several studies show ANC hardware designs in various configurations for headphones, ranging from classical feedforward and feedback to hybrid filter approaches. Vu et al. show major advantages in power consumption and processing speed for hardware-based in-ear ANC headphones [8, 9]. Rivera Benois et al. present and test a hybrid filter structure on an FPGA platform. Their implementation is characterized by very low latencies that are comparable to analogue circuits while providing good noise attenuation [10, 11]. Khan et al. [12] suggest a block LMS algorithm for ANC headphones that is optimized for resource utilization. These studies clearly prove that hardware-based ANC designs are a feasible solution for headphones to reduce low-frequency noise. In contrast to commercially available ANC headphones or flight headsets, a custom hardware design can be adapted to the specific area of application and can be co-designed with the speech enhancement system. Several research groups have presented hardware-based implementations of spectral subtraction. Whittington et al. present a speech enhancement method for in-car speech recognition on a Xilinx Virtex-4 FPGA [13]. They show a very resource-efficient design implemented with Matlab Simulink and the Xilinx System Generator that was validated against a software implementation. The audio files in their experiment had to be converted to raw binary files before they were sent to the FPGA. Amornwongpeeti et al. performed a simulation with the Xilinx System Generator in Matlab Simulink [14]. A dual-channel configuration for spectral subtraction was used. While the results look promising and the authors claim that the design is suitable for hands-free real-time applications, no measurements were performed. Biswas et al. present an implementation of noise estimation and basic spectral subtraction on a hardware platform in [15]. They also use the Xilinx System Generator, and the implementation is synthesized for a Xilinx Spartan-6 FPGA. In [16] they extend their previous work and propose a dual-microphone setup using multi-band spectral subtraction on the same hardware platform. Their results show that a hardware design can be a feasible solution for handheld devices performing speech enhancement. A pipelined architecture for multi-band spectral subtraction is suggested by Bahoura in [17]. It targets a Xilinx Artix-7 FPGA, and the design is implemented with the Xilinx System Generator.
The implementation is designed for maximum throughput and processes audio files loaded into the Matlab workspace. While there are several studies implementing spectral subtraction for FPGA platforms, most of them are clearly in an early development phase. In the following contribution, the implementation of the combined speech enhancement and ANC-system for a flight headset is presented and tested.

2. IMPLEMENTATION
The proposed design serves two major purposes: reducing the noise levels at the ears of the passengers and enhancing the transmitted speech that is recorded by the dual microphones of the headsets. The following section is therefore divided into two parts.

2.1. Active noise control
The ANC-system is implemented in feedforward configuration. Reference microphones are placed on the outside of each ear cup and the error microphones are installed on the inside. Since the left and right sides of the headset have only minor acoustic coupling, the system can be handled as two separate Single Input Single Output (SISO) systems, as shown in [5]. To account for secondary path effects, the Filtered-x Least Mean Square (FxLMS) algorithm is used. A block diagram of the FxLMS is shown in Figure 1.

Figure 1: Block diagram of the SISO FxLMS [18]

P(z) describes the physical primary system. The reference signal x(n) is measured and convolved with Ŝ(z), which is an estimate of the secondary path S(z). The LMS algorithm also uses the error signal e(n) to optimize the coefficients of the adaptive filter W(z). The output of the filter y(n) is changed by the secondary path to y'(n) and subtracted from d(n). Equation 1 describes the update of the adaptive filter coefficients w, where µ is the step size:

w(n + 1) = w(n) − µ e(n) x'(n).    (1)

To improve the convergence speed and minimize the steady-state error of the algorithm, a normalization of the step size can be performed [18]:

µ(n) = α / (L · max[P̂x(n), Pmin]),    0 < α < 2.    (2)

α is called the normalized step size, L is the length of the filter and P̂x(n) is an averaged power estimate of x(n). It is calculated according to Equation 3, where β is a smoothing parameter that influences how much new information is taken into the estimate of P̂x(n) in each time step:

P̂x(n) = (1 − β) P̂x(n − 1) + β x²(n).    (3)

The last improvement used for the FxLMS algorithm is the leakage factor ν(n):

ν(n) = 1 − µ(n) γ,    0 < γ < 1.    (4)

It is weighted with the factor γ and prevents infinite growth of the adaptive filter coefficients, thereby limiting the output power of the algorithm [18]. The final update equation of the normalized FxLMS with leakage factor is given in Equation 5:

w(n + 1) = ν(n) w(n) − µ(n) e(n) x'(n).    (5)

The FxLMS algorithm can get computationally expensive when the ANC application requires high sampling rates, long filter lengths or a high channel count. A high-performance, parameterizable hardware implementation of the normalized FxLMS with leakage factor is presented and validated in feedback configuration in [19] and [20]. The authors also provide validation data and new possibilities for MIMO systems in [21]. The design is written in efficient fixed-point notation and is free of intellectual property cores. For this study, the VHDL code base from the previous studies is used and adapted to the headset, which uses two separate SISO feedforward ANC-systems, one for each ear.
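To make the update rule concrete, the following sketch implements the normalized feedforward FxLMS with leakage (Equations 1 to 5) as a floating-point Python/NumPy reference model. It is only an illustrative software analogue, assuming the secondary-path estimate is an FIR filter no longer than the adaptive filter; the actual system is a fixed-point VHDL design, and the function name fxlms is chosen here for illustration. The default argument values correspond to those later listed in Table 1.

import numpy as np

def fxlms(x, d, s_hat, L=128, alpha=0.005, beta=0.1, gamma=0.0001, p_min=1e-5):
    """Normalized feedforward FxLMS with leakage (Equations 1-5), per sample.

    x     : reference microphone signal x(n)
    d     : disturbance at the error microphone d(n)
    s_hat : FIR estimate of the secondary path S(z), len(s_hat) <= L assumed
    Returns the residual error signal e(n).
    """
    w = np.zeros(L)                # adaptive filter W(z)
    x_buf = np.zeros(L)            # buffered reference, newest sample first
    xf_buf = np.zeros(L)           # buffered filtered reference x'(n)
    y_buf = np.zeros(len(s_hat))   # buffered filter output for the secondary path
    p_x = 0.0
    e = np.zeros(len(x))

    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
        # x'(n): reference filtered with the secondary path estimate
        xf_buf = np.roll(xf_buf, 1); xf_buf[0] = np.dot(s_hat, x_buf[:len(s_hat)])
        # anti-noise y(n) and its version y'(n) behind the secondary path
        y = np.dot(w, x_buf)
        y_buf = np.roll(y_buf, 1); y_buf[0] = y
        e[n] = d[n] - np.dot(s_hat, y_buf)   # s_hat used as stand-in for S(z)
        # Equation 3: recursive power estimate of x(n)
        p_x = (1.0 - beta) * p_x + beta * x[n] ** 2
        # Equation 2: normalized step size
        mu = alpha / (L * max(p_x, p_min))
        # Equations 4 and 5: leaky coefficient update
        w = (1.0 - mu * gamma) * w - mu * e[n] * xf_buf
    return e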
Some minor changes to the design were necessary, because it was changed from a feedback to a feedforward configuration. The VHDL code is integrated into a Matlab Simulink model using the Xilinx System Generator black box block. While some parameters, like the number of static and adaptive filter coefficients, the bit widths of the signals and the number of channels, must be set statically before synthesis, most parameters can be changed dynamically during runtime. The configuration used in this study is explained and discussed later.

2.2. Speech enhancement
To enhance the quality of the transmitted speech signals, spectral subtraction is used. The recorded audio files from the helicopter include multiple audio channels from various positions in the cabin. For the speech enhancement, a dual microphone setup is used as follows: The first signal y_se(n) is recorded right in front of the mouth loudspeaker of the artificial head. It contains speech s_se(n) and background noise d_se(n). In the time domain this can be expressed as

y_se(n) = s_se(n) + d_se(n).    (6)

To get an accurate representation of the background noise d_se(n), another microphone is used and installed facing away from the artificial head. The noisy speech signal can be transformed into the frequency domain by using the Fourier transform:

Y_se(f) = S_se(f) + D_se(f).    (7)

For the FFT, the time series need to be buffered and separated into frames, where m is the number of the frame and k the frequency bin:

Y_se(k, m) = S_se(k, m) + D_se(k, m).    (8)

Since the dual microphone setup already provides all necessary information, spectral subtraction can be performed according to

|Ŝ_se(k, m)| = |Y_se(k, m)| − ζ |D_se(k, m)|,    (9)

to get an estimate of the isolated speech components [22]. |Ŝ_se(k, m)| is the estimated amplitude of the m-th time frame and k-th frequency bin. The parameter ζ is used to tune how much of the noise spectrum is subtracted: for ζ = 1 a full subtraction and for ζ > 1 an over-subtraction is performed. Due to measurement errors, local variations of the noise spectrum or over-subtraction, this method may produce negative estimates of the magnitude spectrum. To prevent this behaviour, a mapping function T[.] is used according to

T[|Ŝ_se(k, m)|] = |Ŝ_se(k, m)|   if |Ŝ_se(k, m)| > η |Y_se(k, m)|,
                  η |Y_se(k, m)|   otherwise.    (10)

The parameter η is used to tune the residual noise floor [23]. Since the phase information only has a minor effect on speech intelligibility, an estimate of the speech signal Ŝ_se(k, m) can be calculated by using the original phase information of Y_se(k, m). The spectrum is then transformed back into the time domain, resulting in the estimated speech signal ŝ_se(n). An overview of the signal flow for a hardware implementation of the basic spectral subtraction algorithm, as described in Equations 6 to 10, is shown in Figure 2.

Figure 2: Block diagram of a dual microphone basic spectral subtraction algorithm

In contrast to the FxLMS algorithm, the basic spectral subtraction is implemented in Matlab Simulink using the Xilinx System Generator. Most of the parameters of the design can be set with a parameter file that can be executed as a Matlab script.
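The per-frame processing of Equations 6 to 10, together with the 50 % overlap and overlap-add reconstruction used in the hardware design, can be summarized in the following floating-point Python/NumPy sketch. It is an illustrative software reference assuming Hann-windowed frames and real FFTs, not the fixed-point System Generator implementation; the function name spectral_subtraction is chosen here for illustration. The default values of frame_len, zeta and eta match Table 1.

import numpy as np

def spectral_subtraction(y, d, frame_len=512, zeta=1.1, eta=0.1):
    """Dual-microphone basic spectral subtraction (Equations 6-10).

    y : noisy speech from the mouth microphone, y_se(n) = s_se(n) + d_se(n)
    d : noise reference from the microphone facing away from the mouth
    Returns the estimated clean speech signal via 50 % overlap-add.
    """
    hop = frame_len // 2                 # fixed 50 % overlap
    win = np.hanning(frame_len)          # Hann window (stored in ROM on the FPGA)
    s_hat = np.zeros(len(y))

    for start in range(0, len(y) - frame_len + 1, hop):
        # Buffer, window and transform both channels (Equation 8)
        Y = np.fft.rfft(win * y[start:start + frame_len])
        D = np.fft.rfft(win * d[start:start + frame_len])

        # Equation 9: magnitude subtraction with over-subtraction factor zeta
        mag = np.abs(Y) - zeta * np.abs(D)

        # Equation 10: spectral floor eta*|Y| prevents negative magnitudes
        mag = np.maximum(mag, eta * np.abs(Y))

        # Reuse the noisy phase of Y and transform back to the time domain
        frame = np.fft.irfft(mag * np.exp(1j * np.angle(Y)), frame_len)

        # Overlap-add reconstruction of the estimated speech
        s_hat[start:start + frame_len] += frame

    return s_hat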
The implementation uses Xilinx intellectual property (IP) cores, for example for the FFT algorithm. Figure 3 gives a broad overview of the design.

Figure 3: Overview of the Matlab Simulink and Xilinx System Generator design of the basic spectral subtraction implementation

The implementation uses a total of four dual Random-Access-Memory (RAM) blocks of configurable length. The size of the dual RAM depends on the length of the investigated time frames. Two RAM blocks are used to buffer the input signals y_se(n) and d_se(n). Driven by a control logic (magenta colored block), they also provide a fixed 50 % overlap when enabled. After buffering and overlapping of the input signals, a Hann window, saved in Read-Only-Memory (ROM), is applied (green block). The Xilinx LogiCORE IP FFT core is used twice in the design; both instances are configured in pipelined mode for maximum throughput. The first core transforms the buffered and windowed signals into the frequency domain. With the variable FFT length of the core, zero padding can be applied. To save hardware resources, both time series are processed serially within one FFT core. This is achieved by delaying the noise signal. Since the FPGA used in this study runs at 100 MHz and the targeted sampling rate is only F_s = 20 kHz, the output of the FFT core is still available within the same time step of the sampling rate. The FFT block delivers the signals Y_se(k, m) and D_se(k, m), which are then processed by the spectral subtraction algorithm (cyan colored block) according to Equations 9 and 10. The parameters ζ and η are configured as inputs and can be changed by the user during runtime. The design mostly uses fixed-point notation, except for parts of the spectral subtraction, which make use of the Xilinx divider IP core configured for floating-point calculations. The resulting estimate of the clean speech Ŝ_se(k, m) is then transformed back into the time domain by the second FFT core, configured as an inverse FFT. Since windowing and a 50 % overlap were used, the signal needs to be reconstructed with the overlap-add method. This is done by using the two additional dual RAM blocks, which are controlled by a second control logic (magenta colored block). The dual RAM is also needed to de-buffer the signals to generate the discrete time series ŝ_se(n). The cleaned speech output can then be transmitted to an external device or saved into the workspace.

3. EXPERIMENTAL SETUP AND RESULTS
The audio signals used in this study were recorded during a test flight in an H120 B helicopter. A Brüel & Kjaer 5128 head and torso simulator was placed next to the pilot in the helicopter, as shown in Figure 4 (left).
Data was recorded during a one-hour flight with additional microphones placed in front of the mouth and ears of the head and torso simulator and in the cabin.

Figure 4: Setup of the head and torso simulator in the helicopter (left) and in the anechoic chamber (right)

A representative audio file is played via the mouth simulator of the head and torso simulator to simulate intercom communication. A 3D-printed capsule for two microphones is placed in front of the head simulator's mouth. One of the microphones is pointed at the mouth to record speech, and the other microphone is pointed in the opposite direction to record mainly background noise. The recordings from this dual microphone setup are used in this study for the speech enhancement algorithm. Due to safety concerns and limitations of time and equipment, the validation of the speech enhancement and the ANC-system had to be performed after the flight in a laboratory. The head and torso simulator is placed in an anechoic chamber certified according to DIN EN ISO 3745 for frequencies above 100 Hz. The rapid prototyping platform used in this study is a dSPACE MicroLabBox. It has integrated 16-bit Analog-to-Digital Converters (ADC) and Digital-to-Analog Converters (DAC) that can operate at up to 1 MHz. The integrated CPU is a Freescale QorIQ P5020 with 2 cores at up to 2 GHz, and the FPGA is a Xilinx Kintex-7 XC7K325T clocked at 100 MHz. As the primary source, an E12 loudspeaker in combination with a D6 amplifier from d&b is used. The recorded helicopter noise is played back by a Brüel & Kjaer 3160-A-042 front end, which is also used for data analysis. The headset for the ANC-system is a low-cost Trust Gaming GXT 414 consumer headset. It was modified as described in [5] with two error microphones and two reference microphones (Brüel & Kjaer 4958). Signal conditioning is performed by a Brüel & Kjaer 2694-A. A picture of the measurement setup with the head and torso simulator is shown in Figure 4 (right). The necessary system identification for the secondary paths of the FxLMS is performed offline before runtime with an LMS algorithm on the FPGA platform. Table 1 gives an overview of the parameters for the implementation of the combined speech enhancement and ANC-system.

Table 1: Parameters of the normalized FxLMS and spectral subtraction algorithm

F_s: 20 kHz | ADC/DAC: 16-bit | Static coefficients: 128 | Adaptive coefficients: 128 | α: 0.005 | β: 0.1
γ: 0.0001 | P_min: 0.00001 | Frame length: 512 samples | Frame overlap: 50 % | ζ: 1.1 | η: 0.1

ANC and speech enhancement both run at 20 kHz and are synchronized by a shared clock divider. The frame length for the FFT is set to 512 samples, which results in time frames of 25.6 ms. The full 16-bit precision of the ADCs/DACs is used. 128 static filter coefficients are sufficient to represent the secondary path. The adaptive filter was tested in configurations of 128 up to 512 coefficients without significant gains in noise reduction performance; therefore, only 128 coefficients are used for efficiency. The parameters for the FxLMS are chosen empirically for a balance between stability and convergence speed, while ζ and η are selected for a slight over-subtraction and a -20 dB residual noise floor [23].
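As a quick plausibility check of the values in Table 1, the short snippet below collects the parameters in a dictionary (the layout is purely illustrative) and derives the two quantities quoted in the text: the 25.6 ms frame duration and the -20 dB residual noise floor implied by η = 0.1.

import math

# Parameters from Table 1 (values from the paper; the dict layout is illustrative)
params = {
    "fs_hz": 20_000,     # sampling rate of ANC and speech enhancement
    "frame_len": 512,    # FFT frame length in samples
    "overlap": 0.5,      # 50 % frame overlap
    "alpha": 0.005,      # normalized FxLMS step size
    "beta": 0.1,         # smoothing of the power estimate
    "gamma": 0.0001,     # leakage weighting
    "p_min": 0.00001,    # lower bound of the power estimate
    "zeta": 1.1,         # over-subtraction factor
    "eta": 0.1,          # spectral floor
}

# Frame duration: 512 samples / 20 kHz = 25.6 ms, as stated in the text
frame_ms = 1000 * params["frame_len"] / params["fs_hz"]

# Residual noise floor set by eta: 20*log10(0.1) = -20 dB relative to |Y|
floor_db = 20 * math.log10(params["eta"])

print(f"frame duration: {frame_ms:.1f} ms, noise floor: {floor_db:.0f} dB")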
Table 2 gives an overview of the resource requirements of the implementation.

Table 2: Resource utilization of the Xilinx Kintex-7 XC7K325T

Type | Used | Available | Utilization
Configurable Logic Block Slices | 9782 | 50950 | 19.20 %
Configurable Logic Block Slice LUTs | 28681 | 203800 | 14.07 %
Configurable Logic Block Slice Flip-Flops | 29342 | 407600 | 7.20 %
Block RAM Blocks (32 Kb) | 1 | 445 | 0.22 %
Block RAM Blocks (16 Kb) | 23 | 890 | 2.58 %
DSP Slices | 163 | 840 | 19.40 %

The design uses less than 20 % of all available resources of the Xilinx Kintex-7 XC7K325T. The most used resource are the DSP slices. Similar findings were already reported when only the FxLMS implementation is considered [19–21]. It should be noted that the resource report already includes two SISO feedforward FxLMS algorithms, one for each side of the headset. Considering the sampling rate of 20 kHz, it would be impossible to run an equivalent software implementation in real time on the CPU of the dSPACE MicroLabBox. The results of the speech enhancement and of the ANC-system are discussed separately in the following paragraphs.

Figure 5 shows the time series and spectrograms of the original noisy speech signal y_se(n) (top) and the enhanced speech signal ŝ_se(n) (bottom). A visual inspection of the original time series indicates high noise levels caused by the helicopter during flight. Speech segments are hardly distinguishable from speech breaks. The enhanced speech signal at the bottom shows very distinct speech events and a low residual noise floor in speech breaks. The two spectrograms on the right-hand side of Figure 5 provide additional information in the frequency domain.

Figure 5: Results of the basic spectral subtraction (original speech and noise signal at the top, enhanced speech signal at the bottom)

The original noisy time series shows low-frequency noise and dominant tonal frequencies around 1200 Hz and 1800 Hz throughout the 10 s recording. The spectrogram of the enhanced speech signal at the bottom shows a significant reduction of the low-frequency noise. In comparison to the use of a high-pass or band-pass filter, low-frequency speech components stay mostly untouched. The dominant tonal events at 1200 Hz and 1800 Hz are also reduced in magnitude, while still being visible. Overall, the speech enhancement effectively removes noise components from the signal. This is also confirmed by listening to the enhanced audio signal. It should be pointed out that, as a result of using basic spectral subtraction, some audible 'musical noise' is present. This effect is one of the known disadvantages of spectral subtraction [23].

In a second step, the performance of the ANC-system is discussed. The results, in the form of a spectrum of the Sound Pressure Level (SPL), are shown in Figure 6.

Figure 6: SPL on the right ear without headset (blue), headset on the head but ANC off (red), ANC-headset active (yellow)

As the measurement setup is symmetric regarding the left and right channel, only results of the right side are discussed. Since the anechoic chamber is not certified for measurements of frequencies below 100 Hz, and due to other limitations of the setup, the SPL below 100 Hz is reduced compared to the sound measured in the helicopter.
The blue line represents a baseline measurement: the helicopter noise is played back in the anechoic chamber and measured with the right ear microphone of the head and torso simulator without the headset in place. The two vertical black lines indicate the frequency range between 100 Hz and 1000 Hz. The spectrum shows a high SPL over a wide range of frequencies with tonal spikes at 600 Hz, 1200 Hz, 1450 Hz, 1700 Hz and 1800 Hz. These spikes correspond to multiples of the engine and rotor frequencies of the H120 B helicopter. Due to the slightly non-stationary character of the recording, some variations of the spectrum can be observed between measurements. The red line in Figure 6 represents a measurement where the headset was on the artificial head, with ANC turned off. It shows the passive dampening of the headset, which occurs above 400 Hz and further increases with higher frequencies. The results with the ANC-system active are represented by the yellow line. Between 100 Hz and 1000 Hz, the ANC-system significantly reduces the SPL compared to the passive dampening of the headset alone. The average reduction of the SPL in that frequency range is 14.6 dB with the ANC-system active. On average there is no measurable benefit of the ANC-system above 1000 Hz: while at some frequencies, for example at 1450 Hz, the SPL is reduced, at other frequencies the SPL increases. As the relevant information is below 2 kHz, the diagram is limited to this frequency span. These observations are consistent with the knowledge about commercially available headsets. Reducing the SPL at higher frequencies is not a priority for ANC-headsets, due to the acceptable passive noise dampening above 1000 Hz.

4. CONCLUSION
In this study, an implementation of a speech enhancement algorithm for a dual microphone flight headset with ANC capabilities on an FPGA platform is presented and tested. The results show a significant improvement of the speech signals enhanced by the basic spectral subtraction algorithm implemented on the FPGA. In addition, the feedforward ANC implementation effectively reduces the SPL below 1000 Hz. The presented efficient implementation uses less than 20 % of the available hardware resources of the Kintex-7 FPGA for algorithms that would be challenging to run in real time on a software-based platform. Future studies could implement and evaluate more complex speech enhancement algorithms. To further improve the noise reduction, the audio setup can be improved, since the headset used in this study only provides low levels of passive noise dampening and is not designed for high noise levels.

ACKNOWLEDGEMENTS
This research paper is partially funded by dtec.bw – Digitalization and Technology Research Center of the Bundeswehr in the project MissionLab, which we gratefully acknowledge.

REFERENCES
[1] Marius Deaconu, Grigore Cican, Adina-Cristina Toma, and Luminița Ioana Drăgășanu. Helicopter Inside Cabin Acoustic Evaluation: A Case Study – IAR PUMA 330. International Journal of Environmental Research and Public Health, 2021.
[2] Yong Chen, Sebastian Ghinet, Andrew Price, Viresh Wickramasinghe, and Anant Grewal. Investigation of aircrew noise exposure levels and hearing protection solutions in helicopter cabin. Journal of Intelligent Material Systems and Structures, 28(8):1050–1058, 2017.
[3] Sebastian Ghinet, Andrew Price, Viresh Wickramasinghe, Yong Chen, and Anant Grewal.
Cabin noise exposure assessment of the Royal Canadian Air Force CH-147F helicopter through flight testing. In Proceedings of the INTER-NOISE 2016 – 45th International Congress and Exposition on Noise Control Engineering: Towards a Quieter Future, pages 3455–3465, 2016.
[4] Thomas Küpper, Paul Jansing, Volker Schöffl, and Simone Van Der Giet. Does Modern Helicopter Construction Reduce Noise Exposure in Helicopter Rescue Operations? The Annals of Occupational Hygiene, 57(1):34–42, 2012.
[5] Florian Ernst, Sten Böhme, and Delf Sachau. Headset mit aktiver Schallreduktion für Hubschrauberpiloten. Fortschritte der Akustik – DAGA 2021, Wien, 2021.
[6] RTCA. DO-214A Audio Systems Characteristics and Minimum Operational Performance Standards for Aircraft Audio Systems and Equipment. Available: www.rtca.org, 2013.
[7] Florian Ernst, Delf Sachau, and Sten Böhme. Verbesserung der Sprachverständlichkeit von Headsets mit aktiver Schallreduktion für Hubschrauberpiloten. Fortschritte der Akustik – DAGA 2022, Stuttgart, 2022.
[8] Hong-Son Vu, Kuan-Hung Chen, Shih-Feng Sun, Tien-Mau Fong, Che-Wei Hsu, and Lei Wang. A 6.42 mW low-power feed-forward FxLMS ANC VLSI design for in-ear headphones. In 2015 IEEE International Symposium on Circuits and Systems (ISCAS), volume 2015-July, pages 2585–2588. IEEE, 2015.
[9] Hong-Son Vu and Kuan-Hung Chen. A High-Performance Feedback FxLMS Active Noise Cancellation VLSI Circuit Design for In-Ear Headphones. Circuits, Systems, and Signal Processing, 36(7):2767–2785, 2017.
[10] Piero Rivera Benois, Patrick Nowak, and Udo Zölzer. Fully digital implementation of a hybrid feedback structure for broadband active noise control in headphones. In 24th International Congress on Sound and Vibration, ICSV 2017, 2017.
[11] Piero Rivera Benois, Udo Zölzer, and Veatriki Papantoni. Psychoacoustic hybrid active noise control structure for application in headphones. In 25th International Congress on Sound and Vibration 2018, ICSV 2018: Hiroshima Calling, volume 2, pages 914–921, 2018.
[12] Mohd Tasleem Khan and Rafi Ahamed Shaik. High-Performance Hardware Design of Block LMS Adaptive Noise Canceller for In-Ear Headphones. IEEE Consumer Electronics Magazine, 9(3):105–113, 2020.
[13] Jim Whittington, Kapeel Deo, Tristan Kleinschmidt, and Michael Mason. FPGA implementation of spectral subtraction for automotive speech recognition. In 2009 IEEE Workshop on Computational Intelligence in Vehicles and Vehicular Systems, volume 2008, pages 72–79. IEEE, 2009.
[14] Sarayut Amornwongpeeti, Nobutaka Ono, and Mongkol Ekpanyapong. Design of FPGA-based rapid prototype spectral subtraction for hands-free speech applications. In Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2014 Asia-Pacific, pages 1–6. IEEE, 2014.
[15] Tanmay Biswas, Chandrajit Pal, Sudhindu Bikash Mandal, and Amlan Chakrabarti. Audio de-noising by spectral subtraction technique implemented on reconfigurable hardware. In 2014 Seventh International Conference on Contemporary Computing (IC3), pages 236–241. IEEE, 2014.
[16] Tanmay Biswas, Sudhindu Bikash Mandal, Debasri Saha, and Amlan Chakrabarti. FPGA based dual microphone speech enhancement. Microsystem Technologies, 25(3):765–775, 2019.
[17] Mohammed Bahoura. Pipelined Architecture of Multi-Band Spectral Subtraction Algorithm for Speech Enhancement. Electronics, 6(4):73, 2017.
[18] Sen M. Kuo and Dennis R. Morgan. Active Noise Control Systems: Algorithms and DSP Implementations. John Wiley & Sons, Inc., 1996.
[19] Alexander Klemd, Marcel Eckert, Bernd Klauer, Jonas Hanselka, and Delf Sachau. A Parameterizable Feedback FxLMS Architecture for FPGA Platforms. In Proceedings of the 10th International Symposium on Highly-Efficient Accelerators and Reconfigurable Technologies – HEART 2019, pages 1–4, New York, USA, 2019. ACM Press.
[20] Johannes Timmermann, Alexander Klemd, Jonas Hanselka, Delf Sachau, and Bernd Klauer. Validation and performance analysis of a parameterizable normalized feedback FxLMS architecture for FPGA platforms. In INTER-NOISE and NOISE-CON Congress and Conference Proceedings, pages 653–661, Seoul, 2020.
[21] Alexander Klemd, Bernd Klauer, Johannes Timmermann, and Delf Sachau. A Flexible Multi-Channel Feedback FxLMS Architecture for FPGA Platforms. In 2021 31st International Conference on Field-Programmable Logic and Applications (FPL), pages 319–326. IEEE, 2021.
[22] Jacob Benesty, Jingdong Chen, and Emanuël A. P. Habets. Speech Enhancement in the STFT Domain. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012.
[23] Saeed V. Vaseghi. Advanced Signal Processing and Digital Noise Reduction. John Wiley & Sons / B.G. Teubner, 1996.