2022-04-12, 17:30–18:00 (Europe/Vienna), Room 1
The 'Accent Bias in Britain' (ABB) project (Sharma et al. 2019) examines current attitudes to accents in the UK, and investigates whether unconscious accent bias plays a role in how job candidates are evaluated. Our attitudinal survey, which asked participants to rate written accent labels for perceived prestige and pleasantness, shows little change over the past half-century in which accents are rated most and least positively. Likewise, the links between accent standardness and 'hireability' judgements revealed by our listening experiments were largely as predicted among respondents from the general population. More optimistically, it appears that recruiters to professional occupations – in the ABB case, the commercial law sector – can suppress unconscious biases towards or against different accents, and appraise differently-accented job candidates impartially and objectively.
In our study we asked listeners to rate five accents of British English: the standard Received Pronunciation (RP), and the non-standard forms Estuary English, Multicultural London English, ‘General Northern’ English, and Urban West Yorkshire English. The degree of non-standardness of accents is of course variable: some non-standard accents are closer to RP than others. The present paper focusses on comparing distance judgements made by human listeners using a combination of auditory-based methods with inter-sample and inter-speaker distance scores generated by an automatic speaker recognition system, Phonexia’s Voice Inspector (https://www.phonexia.com/en/use-case/audio-forensics-software/).
For the auditory-based distance measures, a selection of 'Dialect Density Metrics' (DDMs) was used. These are of three types: (1) the overall number of accent features in an utterance, expressed as a raw count or as a proportion of all the key sites of divergence between RP and the non-standard accent in question; (2) a feature-focussed 'salience' measure quantifying the frequencies of variants of individual phonological variables before combining them; and (3) a count of the number of non-standard features within a moving 3-second window, shifted rightward by 1-second increments. Furthermore, each type of metric comes in two forms, unweighted and weighted: in the first, features are either simply present or absent, while in the second they are ranked on a scale on which 0 means ‘standard’, 1 is ‘non-standard but widespread’, and 2 is ‘non-standard and localisable’.
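By way of illustration, the moving-window metric (type 3) could be sketched as follows. The function name `windowed_ddm`, the (timestamp, weight) token representation, and the example utterance are all hypothetical, not taken from the ABB materials; the weights follow the 0/1/2 scale described above.

```python
def windowed_ddm(features, duration, window=3.0, step=1.0, weighted=False):
    """Per-window dialect density scores for one utterance.

    features: list of (time_in_seconds, weight) tokens, where weight is
              0 (standard), 1 (non-standard but widespread), or
              2 (non-standard and localisable).
    duration: utterance length in seconds.
    """
    scores = []
    start = 0.0
    while start + window <= duration:
        in_win = [w for t, w in features if start <= t < start + window]
        if weighted:
            scores.append(sum(in_win))  # weighted: sum the 0/1/2 ranks
        else:
            scores.append(sum(1 for w in in_win if w > 0))  # presence count
        start += step
    return scores

# A made-up 6-second utterance with four non-standard feature tokens
tokens = [(0.5, 1), (1.2, 2), (3.8, 1), (5.1, 2)]
print(windowed_ddm(tokens, duration=6.0))                 # → [2, 2, 1, 2]
print(windowed_ddm(tokens, duration=6.0, weighted=True))  # → [3, 3, 1, 3]
```

With a 3-second window advanced in 1-second steps, a 6-second utterance yields four overlapping windows, each scored either as a simple presence count or as a weighted sum.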
The six variants of the DDM (three metric types, each unweighted and weighted) were applied to a subset of 20 mock job interview answers used for the listening experiments described earlier. These scores were correlated with the output of the Voice Inspector tool, which evaluated the resemblance of each audio sample to every other sample (20 × 19 = 380 pairwise comparisons in all) by computing sample and speaker models based on mel-frequency cepstral coefficients (MFCCs) derived from the acoustic spectrum represented in each sample.
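The figure of 380 follows from scoring each of the 20 samples against every other sample in both directions, i.e. all ordered pairs. A minimal sketch (the sample indices are placeholders):

```python
from itertools import permutations

# The 20 mock interview answers, indexed 0..19
samples = list(range(20))

# Ordered pairs (a, b) with a != b: each sample scored against every other
pairs = list(permutations(samples, 2))
print(len(pairs))  # → 380, i.e. 20 × 19
```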
We report on how closely these human- and computer-based distance metrics correlate with one another, in order to establish which of the DDM approaches is optimal (i.e. least subjective).
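Such a comparison might, for instance, use a rank correlation between the two sets of scores. The sketch below implements Spearman's rank correlation with the standard library only; the score vectors are invented placeholders, not the ABB data, and the abstract does not specify which correlation statistic was used.

```python
def ranks(xs):
    """1-based average ranks, with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

ddm = [2, 5, 3, 8, 7]             # invented human DDM scores
asr = [0.1, 0.4, 0.2, 0.9, 0.7]   # invented machine distance scores
print(spearman(ddm, asr))         # → 1.0 (identical rank orderings)
```

A rank-based statistic is a natural choice here because the DDM scores and the speaker-recognition distances are on entirely different scales.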
Sharma, Devyani, Erez Levon, Dominic Watt, Yang Ye & Amanda Cardoso. 2019. Methods for the study of accent bias and access to elite professions. Journal of Language and Discrimination 3(2). 150–172. https://doi.org/10.1558/jld.39979.