posh

Candidates at job interviews expect to be evaluated on their experience, conduct, and ideas, but a new study by Yale researchers provides evidence that interviewees are judged based on their social status seconds after they start to speak.
The study, to be published in the Proceedings of the National Academy of Sciences, demonstrates that people can accurately assess a stranger’s socioeconomic position — defined by their income, education, and occupation status — based on brief speech patterns and shows that these snap perceptions influence hiring managers in ways that favor job applicants from higher social classes.
“Our study shows that even during the briefest interactions, a person’s speech patterns shape the way people perceive them, including assessing their competence and fitness for a job,” said Michael Kraus, assistant professor of organizational behavior at the Yale School of Management. “While most hiring managers would deny that a job candidate’s social class matters, in reality, the socioeconomic position of an applicant or their parents is being assessed within the first seconds they speak — a circumstance that limits economic mobility and perpetuates inequality.”
The researchers based their findings on five separate studies. The first four examined the extent to which people accurately perceive social class based on a few seconds of speech. They found that hearing a speaker recite seven random words is enough for listeners to discern the speaker’s social class with above-chance accuracy. They also discovered that speech adhering to subjective standards for English as well as digital standards — i.e. the voices used in tech products like Amazon Alexa or Google Assistant — is associated with both actual and perceived higher social class. Finally, the researchers showed that pronunciation cues in an individual’s speech communicate their social status more accurately than the content of their speech.
The fifth study examined how these speech cues influence hiring. Twenty prospective job candidates from varied current and childhood socioeconomic backgrounds were recruited from the New Haven community to interview for an entry-level lab manager position at Yale. Prior to sitting for a formal job interview, the candidates each recorded a conversation in which they were asked to briefly describe themselves. A sample of 274 individuals with hiring experience either listened to the audio or read transcripts of the recordings. The hiring managers were asked to assess the candidates’ professional qualities, starting salary, signing bonus, and perceived social class based solely on the brief pre-interview discussion without reviewing the applicants’ job interview responses or resumes.  
The hiring managers who listened to the audio recordings were more likely to assess socioeconomic status accurately than those who read transcripts, according to the study. Without any information about the candidates’ actual qualifications, the hiring managers judged the candidates from higher social classes as more competent and a better fit for the job than the applicants from lower social classes. Moreover, they assigned the applicants from higher social classes more lucrative salaries and signing bonuses than the candidates with lower social status.
“We rarely talk explicitly about social class, and yet, people with hiring experience infer competence and fitness based on socioeconomic position estimated from a few seconds of an applicant’s speech,” Kraus said. “If we want to move to a more equitable society, then we must contend with these ingrained psychological processes that drive our early impressions of others. Despite what these hiring tendencies may suggest, talent is not found solely among those born to rich or well-educated families. Policies that actively recruit candidates from all levels of status in society are best positioned to match opportunities to the people best suited for them.”

We all know that it is important to make a good first impression, especially in a job interview. Still, it is slightly disturbing to see scientific proof that (a) that impression is formed by the time we have spoken just seven words and (b) we are doomed unless those words come out in an accent like Jacob Rees-Mogg’s. Not that I imagine the arch-Brexiter has been to many job interviews (though who knows what December might hold).

According to new research from Yale University, when we hear someone speak we form near-instantaneous conclusions about their social class. Michael Kraus, assistant professor of organisational behaviour at the Yale School of Management, reported that, even during brief interactions, speech patterns shape our perceptions of competence, and that people are able to judge social class with reliable accuracy merely from hearing seven random words. “While most hiring managers would deny that a job candidate’s social class matters, in reality, the socio-economic position of an applicant or their parents is being assessed within the first seconds they speak,” he explained.

A chilling thought. No one actually intends to take their parents into a job interview, but it seems that they sneak in anyway. How should we counter this? Make everyone go the full Marcel Marceau? “Tell us about your previous experience using mime, Play-Doh or the medium of interpretative dance.”

In one part of the study, 274 managers with hiring experience listened to audio recordings or read transcripts from 20 candidates, without knowing any details about the applicants’ qualifications. The managers were far more likely to guess a candidate’s socio-economic background accurately if they heard their voice than if they read their written words. Worse: the higher an applicant’s perceived social class, the more likely the managers were to judge them a good fit and award them a higher salary. Ouch. Imagine that — free money just for sounding posh. No wonder my Mancunian grandmother used to try to make me repeat “How now, brown cow” in Margaret Thatcher’s plummy tones.

The proof of the power of these biases is mounting, not only because of studies like these but also because of advances in artificial intelligence and digital metrics. I spent this week at the Professional Speechwriters’ Association’s annual conference in Washington, where Noah Zandan, head of Quantified Communications, gave a fascinating — and slightly terrifying — presentation. His team has pioneered human-trained AI technology that teaches machines to measure the impact of how we communicate, using over 1,400 metrics (voice, accent, gestures, choice of words). His research suggests a more generous 15 seconds to make a first impression. Worse, though, for the speechwriters (who were turning white by this point), the machine judges that content counts for only 11 per cent of our impact when we talk.

Passion, expertise, voice and presence are all twice as important. Conclusion: no one cares much what you say — they care how you look and how you sound. I was judged to have “cheated” in my speech because I have a British accent. Apparently this gives you a “perceived intelligence” bounce with American audiences. If only they knew.

So how do we get around this bias? We could look to the many observable exceptions to the rule. In the 21st century, some non-posh voices stand out as more authentic and distinctive. Witness the lucrative careers of television personalities Danny Dyer, Stacey Dooley, or the cast of Love Island. It is also wise to remember that job interviews are a two-way street: you are interviewing the interviewers too. Would you really want to work for a company stupid enough to hire someone just because they sound as if they have a plum stuck up their bottom?
