Exploring the differences between an immature and a mature human auditory system through auditory late responses in quiet and in noise

Children are disadvantaged compared to adults when they perceive speech in a noisy environment. Noise reduces their ability to extract and understand auditory information. Auditory-Evoked Late Responses (ALRs) offer insight into how the auditory system processes information in noise. This study investigated how noise, signal-to-noise ratio (SNR), and stimulus type affect ALRs in children and adults. Fifteen normal-hearing participants from each group were studied under various conditions. The findings revealed that both groups showed delayed latencies and reduced amplitudes in noise, but children had fewer identifiable waves than adults. Babble noise had a pronounced impact on both groups, limiting its analysis to one condition: the /da/ stimulus at +10 dB SNR for the P1 wave. P1 amplitude in quiet was greater in children than in adults, with no stimulus effect. Children generally exhibited longer latencies. N1 latency was longer in noise, with larger amplitudes in white noise than in quiet for both groups. P2 latency was shorter with the verbal stimulus in quiet, with larger amplitudes in children than in adults. N2 latency was shorter in quiet, with no amplitude differences between the groups. Overall, noise prolonged latencies and reduced amplitudes. Different noise types had varying impacts, with the eight-talker babble noise causing more disruption. Children's auditory systems responded similarly to adults' but may be more susceptible to noise. This research emphasizes the need to understand the impact of noise on children's auditory development, given their exposure to noisy environments, and calls for further exploration of noise parameters in children.


INTRODUCTION
Listening in noise has a deleterious effect on communication. Whether in restaurants, schools or other places where people are surrounded by noise, communication is at risk of becoming arduous. Noise reduces the ability to extract and understand complete auditory information. Understanding speech in these situations relies mainly on knowledge and auditory processing abilities. The impact of noise on speech perception and listening comprehension has been measured in classroom-like settings.
Children perform significantly worse than adults in such environments on listening comprehension and speech perception tasks (Klatte et al., 2010). Children are disadvantaged compared to adults, as their auditory system is still developing towards its most efficient form (Hall et al., 2002). Leibold et al. (2016) found that 8- to 10-year-old children had poorer spondee word identification thresholds than adults in continuous and gated two-talker and speech-shaped noises. These findings align with those of Leibold and Buss's (2013) study comparing consonant identification in school-aged children (5 to 13 years old) and adults in two types of noise: speech-shaped and two-talker maskers. They reported that children under ten had more difficulty identifying the consonants in a speech-shaped noise than 11- to 13-year-olds and adults. In addition, all children performed more weakly than adults in a two-talker masker. These results suggest that children's developing auditory system is less efficient than adults' in noisy communication situations.
Moreover, other factors, such as fatigue (Key et al., 2017), attention (Thompson et al., 2017), and motivation (Silman et al., 2000), have been shown to modulate children's results on behavioural tasks. As opposed to behavioural tests, objective measures of the auditory system can help in understanding how information is processed in noise without being affected by behavioural components such as motivation or fatigue (Cacace and McFarland, 2005; Gyldenkaerne et al., 2014; Hurley, 2018). Auditory-Evoked Late Responses (ALRs) provide deeper comprehension of the physiological characteristics and developmental trajectory of the auditory system (Ponton et al., 2000). ALRs, composed of four distinct sequential positive and negative peaks (P1, N1, P2 and N2), can be measured from early childhood through adulthood (Ponton et al., 2000; Ceponiene et al., 2005; Gustafson et al., 2019). They provide an opportunity to visualize the neural representations of sound encoding at the cortical level (Picton et al., 1974). In adults, P1 is known to be generated in the secondary areas of the primary auditory cortex and is characterized by the early pre-perceptual processing of acoustic sound features (Liégeois-Chauvel et al., 1994; Ceponiene et al., 2005). N1, generated in the supratemporal cortex (Vaughan and Ritter, 1970; Ceponiene et al., 2002), is correlated with the behavioural detection threshold (Martin et al., 1999) and the sound onset features (Näätänen, 1990). P2, sourced at the supratemporal auditory cortex along with the frontal and parietal cortices (Rif et al., 1991), represents sound content feature encoding and modulation of the sensory stimulus information (Ceponiene et al., 2005). N2 reflects sound content feature processing and largely perception-independent auditory processing (Perrault and Picton, 1984); it is elicited across the temporal lobe (Ceponiene et al., 2002). As the auditory system develops throughout childhood and early
adolescence, the ALRs follow a maturation pattern (Ponton et al., 2000; Wunderlich et al., 2006). When elicited in young children, generally before age 9, ALRs are dominated by two wave components, P1 and N2, as they appear very early in their development (Ponton et al., 2000). With the child's growth, the N1-P2 complex develops, and an overall reduction of the latencies and amplitudes of the two initial waves, P1 and N2, can be observed (Pang and Taylor, 2000; Ponton et al., 2000; Ceponiene et al., 2005; Wunderlich et al., 2006). Such changes in wave components' amplitudes and latencies are helpful in understanding the changes occurring with the maturation of the central auditory system (Ponton et al., 2000; Wunderlich et al., 2006).
Studied extensively in adults, noise-evoked ALRs can be modulated through specific parameters, such as (1) the type of noise, (2) the ratio between the stimulus and the noise level (signal-to-noise ratio, SNR) and (3) the type of stimulus (Whiting et al., 1998; Androulidakis and Jones, 2006; Billings et al., 2009, 2011; Papesh et al., 2015; Maamor and Billings, 2017; Small et al., 2018; Gustafson et al., 2019). These parameters are important when measuring auditory encoding in noise. However, only a few studies have investigated whether and how these specific parameters impact children's auditory system.

Types of noise

Androulidakis and Jones (2006) explored the effect of different noises on ALRs in adults. Results showed that a wideband unmodulated masker noise had a more detrimental effect on sound (pure tone) processing than an amplitude-modulated masker noise, as latency was prolonged and amplitude was reduced for all ALR waves (Androulidakis and Jones, 2006). Billings et al.'s (2011) study also compared several noise types: continuous speech-spectrum noise, interrupted speech-spectrum noise, and four-talker babble noise. Results in adults revealed that the shortest P1 and N1 latencies occurred in the interrupted noise conditions, and the most prolonged latency occurred with the four-talker babble noise (Billings et al., 2011). Furthermore, compared to continuous noise, four-talker babble noise decreased the N1 amplitude (Billings et al., 2011). For the other types of noise, such as the interrupted speech-spectrum noise, and other wave components, such as P2, amplitudes were generally reduced but did not always reach significance (Androulidakis and Jones, 2006; Billings et al., 2011). Maamor and Billings (2017) compared three types of noise: a continuous speech-spectrum noise, a one-talker modulated noise and a
four-talker babble noise. Findings demonstrated that the four-talker babble noise, which has variation in its temporal envelope and spectrum, resulted in the poorest waveforms at all SNRs, as amplitudes were significantly decreased in adults. The latencies were significantly increased, especially for the N1 and P2 wave components. No ALR studies have compared the impacts of different types of noise on children's auditory system.

Types of stimulus
The type of stimulus can impact ALR wave components' magnitude and timing. In adults, verbal stimulation with speech sounds in quiet and in noise generally resulted in longer latency and larger amplitude for the N1-P2 complex than tone stimulation (Tiitinen et al., 1999; Billings et al., 2011). ALRs' peak latencies have been reported to be more sensitive to noise than amplitudes. Billings et al. (2011) recorded passive ALRs in normal-hearing young adults with two types of stimuli: verbal (/ba/) and nonverbal (1000 Hz pure tone). Four listening conditions were used: quiet, continuous noise, interrupted noise, and four-talker babble. The results revealed prolonged latency for the P1, N1, and P2 peaks with the verbal stimulus compared to the 1000 Hz pure tone in noise conditions, but no significant amplitude differences. No known ALR studies have compared tone- and verbal-evoked stimulation in noise among children. In quiet, types of stimuli have been explored in children, and results revealed that the slope following the P1 waveform and the N2 peak were more negative for the syllable than for the pure tone (Ceponiene et al., 2005). Similarly, Koravand et al. (2012, 2013) explored the difference between tone and verbal stimuli in children and reported extended latencies and lower amplitudes for the verbal compared to the tonal stimuli.
In summary, based on the studies mentioned, experimental parameter modulations affect ALRs in children and adults (Whiting et al., 1998; Androulidakis and Jones, 2006; Billings et al., 2009, 2011; Papesh et al., 2015; Maamor and Billings, 2017; Small et al., 2018; Gustafson et al., 2019). Overall, in adults, babble noise produced longer latency and smaller amplitude on ALR waveforms than white noise (Billings et al., 2011). Wave latencies and amplitudes can vary as a function of the SNR (Billings et al., 2011; Maamor and Billings, 2017; Benítez-Barrera et al., 2021). Only one study directly compared adults' and children's ALRs elicited in noise (Gustafson et al., 2019). At a favourable +15 dB SNR, no significant differences were found between adults and children (Gustafson et al., 2019). This lack of difference at high SNRs, combined with behavioural studies showing that children under the most challenging conditions required higher SNRs than adults to process speech in noise (Hall et al., 2002; Klatte et al., 2010; Leibold and Buss, 2013; Leibold et al., 2016), demonstrates the need for further investigation. Verbal stimuli also produced longer latencies and smaller amplitudes than tonal stimuli in children (Koravand et al., 2012, 2013). Such results underline the need for further investigation of parameter modulation in children compared to adults to better understand the disruption that noise can inflict on children's ALRs.

Objectives
The main objective of this study was to explore the modulation of cortical sound encoding in noise in both children and adults. More specifically, this study aimed to investigate the effects of modulating the listening conditions (quiet and in noise), types of noise (eight-talker babble noise and white noise), SNR (+10, +5 and 0 dB) and types of stimulus (1000 Hz pure tone and verbal /da/) on the ALR wave components P1, N1, P2, and N2 in children compared to adults. The present study is innovative because no known study has yet evaluated, with the same protocol and parameter modulation, if and how noise can alter immature auditory systems differently than mature auditory systems through ALRs. With multiple parameter modulations come multiple hypotheses: (1) a reduction of amplitudes and an elongation of latencies between quiet and noise conditions as a function of the SNR and type of stimulus (Billings et al., 2011; Koravand et al., 2012, 2013; Maamor and Billings, 2017; Benítez-Barrera et al., 2021); (2) a greater impact of the babble noise compared to the white noise (Billings et al., 2011; Maamor and Billings, 2017); and (3) in line with behavioural studies (Leibold and Buss, 2013; Leibold et al., 2016), children's ALRs will be affected in both latency and amplitude to a greater extent than adults' in the most unfavourable conditions. The results of this study could provide more insight into the differences and similarities of the mechanisms involved in auditory processing in noise in children and adults.

EXPERIMENTAL PROCEDURES
The CHU Sainte-Justine Research Ethics Board approved this study. All adult participants and legal guardians received and signed a copy of the written consent form. Children's assent was obtained before the beginning of the study.

Participants
Fifteen young adults (mean age: 24.50 ± 1.59 years; range 22 to 27 years; twelve women) and fifteen children (mean age: 9.90 ± 1.35 years; range 8 to 12 years; four girls) whose first language was French participated in the study. Participants with any of the following conditions were excluded: hearing thresholds ≥ 15 dB HL at any frequency from 250 to 8000 Hz, a history of significant otologic problems such as chronic otitis media, current or past use of ototoxic medication, congenital, neurological, developmental, or metabolic disorders, a diagnosis of intellectual disability, or premature birth (<37 weeks).

Procedures
The research project took place at the École d'orthophonie et d'audiologie of the Université de Montréal. For each participant, the data collection lasted about two hours and 30 minutes and comprised the evaluation of the peripheral auditory system and electrophysiological recordings.

Peripheral auditory evaluation
An extensive peripheral auditory evaluation was conducted in both ears for each participant: visualization of the external ear canals, and measurements of middle ear function, ipsilateral and contralateral stapedial reflex thresholds at four frequencies (500, 1000, 2000 and 4000 Hz; GSI TympStar Pro), and distortion product otoacoustic emission (DPOAE) amplitude and SNR (Otodynamics ILO6) at seven frequencies (1000, 1500, 2000, 3000, 4000, 6000 and 8000 Hz). In a sound-attenuated booth, tonal audiometry (Madsen Astera 2) at six frequencies (250, 500, 1000, 2000, 4000 and 8000 Hz) and word recognition scores in quiet were performed in each ear.

Electrophysiological measurements
Recording. The measurements were recorded with the PyCorder software (BrainVision, LLC) on a computer connected to the BrainVision actiCHamp™ amplifier (BrainVision LLC). A cap with 64 Ag/AgCl electrodes was positioned on each participant's head according to the 10-20 system (Jasper, 1958; Klem, 1999). For each active electrode, an impedance of no more than 40 kΩ was obtained (Mathewson et al., 2017). The Oz electrode was selected as the online reference electrode. Vertical eye movements were recorded by electrodes located under and above the eyes. Horizontal eye movements were recorded with the adjacent frontotemporal electrodes. Participants were instructed to avoid movements as much as possible, to ignore the sound stimulation when presented, and to watch a chosen muted movie with subtitles. Breaks were offered between blocks as needed by the participant.
Binaural insert earphones (E-A-RTONE 3A) were used to transmit the stimuli, comprising 2800 trials in two different conditions: 400 trials in silence and 2400 in noise. Auditory stimuli were presented through a computer with the Presentation® software (Neurobehavioral Systems Inc.) via an audiometer (Madsen Midimate 602). The auditory stimulation was separated into fourteen blocks of 200 presentations; each had an approximate duration of four minutes and 15 seconds. The Praat synthesizer was used to create the syllable /da/ and a 1000 Hz pure tone for the experiment (Boersma and Weenink, 1992). For both stimuli, the duration was 170 msec with rise and fall times of 10 msec. Both stimuli were calibrated with a sound level meter (Larson Davis Laboratories model 800B) to an output of 70 dBA. The interstimulus interval was randomly jittered between 1130 and 1330 msec. The acquisition window (epoch) extended from −200 to 800 msec. Two types of noise were employed: white noise generated with the Audacity® software (Audacity Team, 1999) and an eight-talker Canadian French babble noise (Lagacé, 2014). Calibrated with the sound level meter, both noise types were presented at three SNRs: +10, +5 and 0 dB. The order of the fourteen stimulus conditions was randomized between participants.
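The relationship between stimulus level, noise level and a target SNR can be illustrated with a short sketch. The following Python fragment is not the study's calibration procedure (which used a sound level meter on the acoustic output); it simply shows, under the assumption that digital RMS level stands in for calibrated output, how a masker could be scaled to a desired SNR before mixing:

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so that the signal-to-noise ratio equals `snr_db` (in dB),
    then return the mixture. RMS level is used as a proxy for calibrated level."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    # Gain that places the noise exactly snr_db below the signal level
    gain = rms(signal) / (rms(noise) * 10 ** (snr_db / 20))
    return signal + gain * noise

# Illustrative 1000 Hz tone, 170 ms at a 48 kHz sampling rate,
# mixed with white noise at +10 dB SNR
fs = 48000
t = np.arange(int(0.170 * fs)) / fs
tone = np.sin(2 * np.pi * 1000 * t)
noise = np.random.randn(tone.size)
mixed = mix_at_snr(tone, noise, snr_db=10)
```

The same scaling applies unchanged to the babble masker; only the `noise` waveform differs.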
Pre-processing. The EEG signal was analyzed in MATLAB version 2022b (The MathWorks Inc., 2022) with FieldTrip (Oostenveld et al., 2010). EEG segments with an amplitude exceeding ±100 µV were excluded from the average data to remove artifacts. The remaining segments were then corrected with the Independent Component Analysis (ICA) method (Makeig et al., 1993) to remove eye movements from the signal. EEG data were filtered with a 30 Hz low-pass filter, baseline-corrected over a window of −200 to 0 msec, and then averaged for each condition.
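The core of this pipeline (threshold-based artifact rejection at ±100 µV, baseline correction over the pre-stimulus window, and averaging) can be sketched as follows. This is an illustrative Python/NumPy analogue of the MATLAB/FieldTrip analysis, with the ICA ocular correction and the 30 Hz low-pass filter omitted for brevity; `epochs` and `times` are hypothetical inputs:

```python
import numpy as np

def average_epochs(epochs, times, reject_uv=100.0, baseline=(-0.2, 0.0)):
    """Artifact-reject, baseline-correct, and average single-trial epochs.

    epochs : (n_trials, n_samples) array in microvolts
    times  : (n_samples,) array in seconds (epoch spans -0.2 to 0.8 s)
    Returns the averaged waveform and the number of retained trials.
    """
    # Drop any trial whose amplitude exceeds +/-100 uV anywhere in the epoch
    keep = np.all(np.abs(epochs) <= reject_uv, axis=1)
    clean = epochs[keep]
    # Subtract each trial's mean over the -200 to 0 ms baseline window
    bl = (times >= baseline[0]) & (times < baseline[1])
    clean = clean - clean[:, bl].mean(axis=1, keepdims=True)
    return clean.mean(axis=0), int(keep.sum())
```

In the actual analysis these steps operated per condition and per electrode, with ICA applied after rejection and before filtering.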
ALR components (P1, N1, P2, N2) were identified by two independent judges according to time windows that varied with the listening conditions. The identification was based on two rules: (1) the most negative or positive wave component within predetermined time windows based on previous studies (Gustafson et al., 2019; Benítez-Barrera et al., 2021) and on grand average waveforms (P1: children 40-290 ms, adults 50-170 ms; N1: children 100-300 ms, adults 100-250 ms; P2: children 160-360 ms, adults 150-350 ms; N2: children 250-450 ms, adults 230-460 ms); and (2) challenging identifications were completed according to related ALRs (same stimulus and noise type). If discrepancies occurred, the judges discussed until they reached an agreement. The inter-judge agreement was 88% and 85% for the adults' and children's wave components, respectively.
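Rule (1) above, selecting the most positive or negative deflection within a predetermined window, can be expressed as a short sketch. The function below is a hypothetical illustration in Python, not the judges' actual procedure, which also relied on visual inspection and cross-referencing of related ALRs:

```python
import numpy as np

def pick_peak(avg, times, window, polarity):
    """Return (latency_s, amplitude) of the extremum inside `window`.

    avg      : averaged waveform, one value per sample
    times    : sample times in seconds
    window   : (lo, hi) search window in seconds, e.g. (0.100, 0.250) for adult N1
    polarity : +1 for positive waves (P1, P2), -1 for negative waves (N1, N2)
    """
    lo, hi = window
    mask = (times >= lo) & (times <= hi)
    seg, seg_t = avg[mask], times[mask]
    # The extremum of polarity*seg is the most positive (or most negative) point
    i = np.argmax(polarity * seg)
    return seg_t[i], seg[i]
```

Applied per component and per group, with the study's windows listed above, this yields one latency/amplitude pair per participant and condition.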

Statistical analysis
For some experimental conditions, not all ALR waves were identifiable. After wave identification, wave components for each of the fourteen experimental conditions with more than 3 (>20%) participants missing in either the children's or the adults' group were excluded from the analysis. In conditions where three or fewer participants did not have identifiable wave components, the group average was calculated to provide a value for the missing wave components. Repeated-measures analyses of variance (ANOVAs) were performed with IBM SPSS Statistics (Version 27). A three-level analysis was conducted by controlling for the effects of the independent factors Group (children, adults), Listening condition (quiet, noise at specific SNRs), and Stimulus (syllable /da/, 1000 Hz pure tone) on the latency and amplitude of the ALR wave components. The threshold of significance was set at p < 0.05. For post-hoc t-test analyses, a Bonferroni correction was applied.
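The exclusion and imputation rule described above can be summarized in a short sketch. The following Python function is an illustrative reconstruction; `values` is a hypothetical per-participant array for one condition within one group, with `np.nan` marking absent waves:

```python
import numpy as np

def prepare_condition(values, max_missing=3):
    """Apply the study's inclusion rule to one condition within one group.

    values: per-participant latency or amplitude, np.nan where no wave
    was identifiable. Conditions with more than `max_missing` (>20% of 15)
    missing participants are excluded (returns None); otherwise missing
    entries are replaced by the group mean of the identifiable waves.
    """
    values = np.asarray(values, dtype=float)
    missing = np.isnan(values)
    if missing.sum() > max_missing:
        return None  # condition excluded from the ANOVA
    filled = values.copy()
    filled[missing] = np.nanmean(values)  # group-mean imputation
    return filled
```

Only conditions surviving this filter in both groups entered the repeated-measures ANOVAs.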

RESULTS
The percentages of wave presence for all fourteen conditions in both groups are compiled in Table 1. Three to eight conditions could not be included in the analysis because wave components were absent for more than 20% of the participants in either group. In total, seven conditions for P1, five conditions for N1, and six conditions for P2 and N2 were included in the statistical analysis. These were mainly the quiet and white-noise conditions. Only one babble-noise condition could be analyzed, for the P1 wave component: /da/ at +10 dB SNR.
After wave identification and elimination of the conditions that did not meet the criteria for further analysis, repeated-measures ANOVAs (see Table 2) and, when necessary, t-tests were performed on the analyzable conditions' latencies (see Fig. 1) and amplitudes (see Fig. 2).

P1
ANOVAs were performed on the P1 latency (see Fig. 1(A)) and amplitude (see Fig. 2(A)) results according to a sequence of three ANOVAs for each dependent variable. A first ANOVA (1) was done with the syllable /da/ and the 1000 Hz pure tone stimuli in quiet and in white noise at +10 dB SNR for the two groups. A second ANOVA (2) was performed on the results obtained in the two groups with the syllable stimulus only, in four listening conditions: quiet and white noise at +10, +5 and 0 dB SNR. Finally, a third ANOVA (3) was done with the results for /da/ in quiet, babble noise, and white noise at +10 dB SNR.

N1
Two ANOVAs were performed. One ANOVA (1) was performed on the results with the syllable /da/ and the 1000 Hz pure tone stimulation in quiet for the two groups. A second ANOVA (2) was done on the results obtained for the two groups with /da/ in quiet and in white noise at +10 and +5 dB SNR. Both ANOVAs were applied to the two dependent variables, N1 latency (see Fig. 1(B)) and amplitude (see Fig. 2(B)).
Amplitude. In both ANOVAs (in quiet with /da/ and pure tone stimulation for the two groups, and with /da/ in quiet and in white noise at +10 and +5 dB SNR for the two groups), results showed a significant effect of Group ((1) [F(1,28) = 14.833, p < 0.001, η² = 0.346]; (2) [F(1,28) = 8.905, p = 0.006, η² = 0.241]) on N1 amplitude, with greater values in the group of adults than in the group of children. With /da/ in quiet and in white noise at +10 and +5 dB SNR for the two groups, a significant effect of Listening condition [F(2,56) = 5.871, p = 0.005, η² = 0.173] was observed. T-test results with Bonferroni correction revealed that N1 amplitude was significantly lower in quiet than in white noise at +10 dB SNR [T(29) = 2.820, p = 0.009] and at +5 dB SNR [T(29) = 2.960, p = 0.006]. There was no significant difference between +10 and +5 dB SNR.

P2
An ANOVA was carried out on the results obtained for the two groups with the pure tone and with the syllable /da/, in quiet and in white noise at +10 and +5 dB SNR, for P2 latency (see Fig. 1(C)) and then for P2 amplitude (see Fig. 2(C)).

N2
For N2 latency (see Fig. 1(D)) and amplitude (see Fig. 2(D)), one ANOVA was done on the results obtained in the two groups with the 1000 Hz pure tone and the syllable /da/ in quiet and in white noise at +10 and +5 dB SNR.

DISCUSSION
The primary objective of this study was to understand the effect of types of noise, SNRs, and types of stimulus on the four ALR wave components (P1, N1, P2, and N2) in children and adults. The main findings include marked differences in configuration between children's and adults' ALRs with the modulation of these three parameters. An average of 64% identifiable waves in children, compared to 78% in adults, is in line with previous behavioural research indicating that children are more affected by noise than adults (Hall et al., 2002; Klatte et al., 2010; Leibold and Buss, 2013; Leibold et al., 2016), but not with the electrophysiological study of Gustafson et al. (2019), who did not report differences between adults' and children's ALR results in noise at +15 dB SNR. This could be explained by the testing protocol, as their recording conditions were more favourable than those of the current study. In the present study, smaller SNRs were used, and results showed that adults and children had different neurophysiological responses in noise: children's most noise-resistant waves were P1 and N2, whereas adults' were P1 and N1 in babble noise and P1, N1 and P2 in white noise. N2 was more inhibited and less prominent in the quiet condition in adults than in children, which may explain its extinction in noise (Picton et al., 1974). The immaturity of the structures of the central auditory system may explain the difficulty of correctly processing information when overloaded with noise.

Types of noise
The type of noise did not have the same effect on all ALR components. White noise was the most favourable noise condition: all three white-noise conditions involving the verbal stimulus and two conditions involving the tone could be analyzed. The results are in line with the literature as, overall, waveform latencies were significantly longer and amplitudes considerably smaller in white noise than in quiet (Whiting et al., 1998; Cunningham et al., 2001; Hayes et al., 2003; Androulidakis and Jones, 2006; Kaplan-Neeman et al., 2006; Billings et al., 2007, 2009, 2011, 2013; Anderson et al., 2010; Parbery-Clark et al., 2011; Almeqbel and McMahon, 2015; Papesh et al., 2015; Billings and Grush, 2016; Maamor and Billings, 2017; Gustafson et al., 2019; Benítez-Barrera et al., 2021). The eight-talker babble noise affected wave morphology much more than the white noise (Figs. 3 and 4). These findings mirror the results of Benítez-Barrera et al.'s (2021) study, which found heavily altered waveform components with a four-talker babble noise at +15 and +10 dB SNR. These results confirm that babble noise is more disruptive than white noise. In the current study, P1 and N1 were barely identifiable, even in adults, with this type of noise at the tested SNRs, except for P1 with /da/ at +10 dB SNR. P2 and N2 could not be identified in at least twelve participants in either group in any babble condition, precluding their analysis. These findings suggest that the structures of the central auditory system generating these waveforms were more affected by babble noise than those leading to the formation of the P1 waveform. This could indicate that neural resources were allocated primarily to the structures engaged in the basic processing of sound features.
In contrast, results from the studies of Billings et al. (2011) and Maamor and Billings (2017) showed that most of their evoked potentials recorded in babble noise had identifiable wave components. Inconsistencies between the studies could be due to protocol differences, as these two studies used a four-talker babble noise, whereas the current study used an eight-talker babble noise. Maamor and Billings (2017) noted that while the effects of a speech-shaped noise and the babble noise were similar, the variability of the babble noise and its greater resemblance to the stimulation resulted in poorer waveforms. Moreover, results of behavioural studies have shown that a masking noise composed of eight talkers has high levels of both informational and energetic masking.
In contrast, a masking noise of four talkers has a more informational than energetic masking effect (Simpson and Cooke, 2005; Rosen et al., 2013), making the eight-talker babble the more efficient masker. Further investigation is required into the distinction between the effects of informational/energetic masking and of the spectral/temporal similarities between the babble noise and the stimulation on electrophysiological recordings. Nonetheless, previous results and those of the present electrophysiological study support the idea that more ecologically relevant noise (closer to the stimulation) can have a more destructive impact on auditory encoding than continuous white noise.

SNR
In previous studies, the amplitude and latency of ALR components generally varied as a function of the SNR level (Whiting et al., 1998; Billings et al., 2011; Maamor and Billings, 2017; Small et al., 2018; Benítez-Barrera et al., 2021). This trend in ALR parameters was seen in the present study when the stimuli were masked by white noise, showing a significant reduction of the amplitudes and a significant elongation of the latencies with decreasing SNR. Such an effect could not be seen in the listening conditions measured in babble noise. As previously stated, most babble-noise conditions could not be analyzed due to a lack of identifiable waveforms. Therefore, observing the SNR effect on the amplitude and latency of the ALR components in babble noise was almost impossible in both groups. However, children's central auditory system seemed more affected at the most challenging SNRs, as the number of identified wave components was lower in children than in adults (see Table 1).

Types of stimulus
ALR components were influenced by the type of stimulus. The verbal stimulus /da/ was in general more noise-resistant than the 1000 Hz pure tone for both groups, producing the highest percentage of identifiable waves in both types of noise at all SNRs (see Table 1). In the listening conditions where both types of stimulus were identifiable, P1, P2 and N2 were generally more prominent, and N2 was significantly longer, for the verbal condition. This has been reported repeatedly, as responses to tonal stimuli seem more mature than responses to verbal stimuli, explaining the significantly larger (Ponton et al., 2000; Ceponiene et al., 2005) and usually longer (Billings et al., 2011) ALRs for verbal than for tonal stimulation in children. Children had significantly greater waves than adults in the tonal condition but an absent P2 wave component at an SNR of 0 dB. N2 showed a similar effect for the tonal stimuli, where children had significantly greater amplitudes than adults in both the quiet and the +10 dB SNR conditions but not at the +5 dB SNR condition. This effect was not recorded in the verbal conditions, as children's waves were significantly larger in all conditions. These results are similar to Billings et al.'s (2011) findings in adults, where the interaction between the spectral characteristics of the type of stimulus and the type of noise influenced the wave parameters. In the current study, both the group (adults and children) and the type of stimulus (verbal and tonal) interacted with the noise.
Unexpectedly, N1 amplitude was significantly more negative in the white-noise conditions (+10 and +5 dB SNR) than in quiet for the verbal stimulus in both groups. Such increased negativity in noise was not recorded for the tonal stimulus. Results of a series of studies by Billings et al. (2007, 2009, 2011, 2013, 2016) and Maamor and Billings (2017) did not show such an effect using verbal stimuli and similar noise. A few studies have reported this phenomenon sparingly (Papesh et al., 2015; Gustafson et al., 2019). For example, Gustafson et al. (2019) found similar results, where N1 was more negative in the noise condition in 7- to 10-year-old participants but not in older children or adults. They explained these findings by the incomplete maturation of the ALR components in quiet for these children (Gustafson et al., 2019). Such an explanation is not supported by the current study or by the results of Papesh et al. (2015), as both show larger N1 amplitude in noise than in quiet with the verbal stimulus in adults. Papesh et al. (2015) offered another perspective on this phenomenon by proposing that the enhancement of N1 amplitude could be associated, in part, with the salience of the verbal stimulus with respect to the noise level. Such an effect could not be reproduced by the 1000 Hz pure tone in the white noise because this stimulus was easier to mask than the /da/: its frequency is included in the combination of frequencies composing the white noise, reducing the salience of the stimulus and, in turn, the N1 amplitude. Further research with different paradigms and parameter modulations is needed to understand this effect, as its presence is not systematic.
In conclusion, the current study examined how parameter modulations with different types of noise, SNRs and types of stimulus can inform on the similarities and differences between adults' and children's sound encoding at the cortical level. Consistent with the reviewed literature, noise generally elongated latencies and reduced amplitudes in all noise conditions compared to quiet. The results also showed that tones and speech stimuli are not equally noise-resistant and that an eight-talker babble noise was more disruptive than white noise. The results revealed that children's auditory system responded similarly to adults' but, due to its immaturity, seemed more impacted when processing auditory information in noise. Understanding how children's auditory systems are affected by noise is fundamental, as children spend a significant amount of time in noisy situations. Future research is needed to identify the interactions between different noise parameter modulations in children and how they affect their immature auditory system.

Limitations of the study
While the current study provided valuable insights into the differences in auditory late responses (ALRs) between children and adults across diverse listening conditions, certain limitations must be considered. One limitation was sex, which was not well balanced between groups: women were the majority in the adult group, whereas girls were a minority in the children's group. This difference could have influenced the results, as men's and women's auditory system function can differ slightly (Tobias, 1965; Roche et al., 1978; Morlet et al., 1995). The sample size, while sufficient for the optimal quiet conditions, was small, with only 15 participants in each group. Children generally had fewer identifiable wave components than adults, especially under the most adverse listening conditions. The statistical analysis could not include multiple conditions, almost completely excluding the six babble-noise conditions because the wave components (P1-N1-P2-N2) were unidentifiable in many participants. A statistical comparison of the wave components in babble noise against those in white noise could have helped better quantify the differences between adults and children. In the group-by-condition interactions, P2 and N2 wave components were excluded more often than P1 and N1. Taken together with the presence of interactions primarily in conditions with higher SNRs and their absence in lower-SNR conditions, this suggests reduced statistical power for detecting group differences under more challenging listening conditions. Increasing the number of participants in both groups could help resolve this limitation. There were also limitations in the choice of modulated parameters. The eight-talker babble noise had a strong adverse effect on the wave components, and the most challenging conditions, such as babble noise at 0 dB SNR, produced little to no identifiable wave components in either group. A future study with more favourable SNRs and a comparison with a four-talker babble noise and a spectrally matched continuous noise would help better understand the destructive effect of the eight-talker babble noise and better evaluate its impact on children's waves without the floor effect. The lack of behavioural measures in noise was also a limitation, as it prevented assessing the relationship between the ALRs and behavioural results in noise obtained with the same parameters.

Fig. 1.
Fig. 1. Average latency (and standard error) of the P1 (a), N1 (b), P2 (c) and N2 (d) ALR waveforms obtained in 15 adults (A) and 15 children (C) in quiet and in white noise at a signal-to-noise ratio of +10, +5 or 0 dB with the 1000 Hz pure tone and the syllable /da/.

Fig. 2.
Fig. 2. Average amplitude (and standard error) of the P1 (a), N1 (b), P2 (c) and N2 (d) ALR waveforms obtained in 15 adults (A) and 15 children (C) in quiet and in white noise at a signal-to-noise ratio of +10, +5 or 0 dB with the 1000 Hz pure tone and the syllable /da/.

Fig. 3.
Fig. 3. Grand average (Cz) in quiet and babble noise at +10, +5, 0 dB SNRs for the 15 adults with the verbal /da/ (A) and 1000 Hz pure tone (B) and the 15 children with the verbal /da/ (C) and 1000 Hz pure tone (D).

Fig. 4.
Fig. 4. Grand average (Cz) in quiet and white noise at +10, +5, 0 dB SNRs for the 15 adults with the verbal /da/ (A) and 1000 Hz pure tone (B) and the 15 children with the verbal /da/ (C) and 1000 Hz pure tone (D).

Table 1.
Percentage of the presence of each wave component (P1, N1, P2 and N2) for each group (adults and children) in all fourteen conditions

Table 2.
Repeated-measures ANOVA results for each wave component (P1, N1, P2 and N2) for both dependent variables (latency and amplitude) in all analyzable conditions