BMH5102 Talent Assessment & Selection

Group Study & Presentation for Lesson 4

Artificial Intelligence and Applicant Screening

Please read the following writeups about the use of artificial intelligence in applicant screening and then answer the questions that follow from the perspective of talent assessment and selection.

Report (A): How Big Law Firms Are Using AI to Cut Down on Hiring Biases

More and more law firms are turning to behavioural psychology and artificial intelligence in their hiring and recruiting processes, including Shearman & Sterling, Cadwalader, Wickersham & Taft, and Willkie Farr & Gallagher.

The practice is meant to bring some empirical science into the hiring process and to bring on more gender- and racially diverse attorneys, but it is not without its pitfalls, which can include reinforcing biases themselves.

Companies such as Suited, Thine and Parker Analytics are using machine learning, artificial intelligence, and behavioural psychology to create better-informed systems for recruiting attorneys.

Firms are turning to these companies to recruit more diverse classes of attorneys who, theoretically, are more likely to become successful lawyers down the road.

But how will law firms determine successful outcomes, and how will these companies avoid bias in the algorithm itself?

Take, for example, Amazon, which developed an AI tool to screen resumes in 2014 that, according to a 2018 Reuters report, actually led to fewer women making it through the screening process.

The company ended up scrapping the tool.

Typically, firms that want to employ AI and behavioural psychology draw their preferred traits - whether that be team-oriented or adaptability - from successful attorneys, and sometimes business professionals, in their firms.

But the legal industry as a whole has long struggled with hiring women and people of colour, so simply drawing from their own ranks leads to biases.

"We go into every model that we're building for each organization assuming that there's bias in the data. The algorithms themselves aren't biased; it's the data that they're trained on where there could be bias," said Matthew Spencer, CEO of Suited, an AI company that serves firms such as Cadwalader and Wilson Sonsini Goodrich & Rosati.

Companies such as Suited and Thine try to address these potential biases by running law firms' behavioural matrix against a large data set of attorneys to weed out what the companies call "adverse impacts."
公司SuitedThine试图解决这些潜在偏见,通过大量律师行为矩阵进行分析,以剔除公司所称"不利影响"

Thine CEO and co-founder Sang Lee said Thine's database draws upon thousands of attorneys of all walks of life.
Thine首席执行官联合创始人表示Thine数据库汇集了成千上万律师来自各行各业的人士

And when Thine is developing a personality matrix for its clients, it interviews the firm's attorneys and also its business professionals such as chief talent officers and internal recruitment staff.

"We're very intentional about the stakeholders we want designing the traits," Lee said.

"Instead of thinking about the people that are already successful and trying to emulate that, we bring people in who are successful but are also profoundly diverse - meaning not just the partners."

Shearman & Sterling, a client of Thine, was well aware of the potential for bias in tech-assisted recruiting, and had Thine leverage its database to back-test any traits that may negatively affect minority candidates or women.

"The characteristics we look for are things like team-oriented or goal-oriented," said John Nathanson, head of Shearman & Sterling's hiring committee.

"The concepts seem very broad and neutral but we worry, obviously, that we are human beings, and we wanted to back-test it. We had to get and did get confidence and comfort that they won't be inherently biased."

Still, initial testing and programming have shown successful results in the areas of diversity.

"Broadly speaking, these firms are hiring the most diverse classes they've ever hired from the broadest range of law schools that they've ever hired them from," Spencer said.

Firms that have long used this form of interviewing attest to its success.

According to Andy Colón, chief talent officer at Thompson Hine, the number of Black attorneys at the firm has tripled since it instituted various changes to its interviewing process in 2017.

The numbers of women and LGBTQ+ attorneys have similarly jumped.
女性LGTBQ+律师数量同样激增。

Additionally, legal hiring AI is still in its infancy.

Few, if any, firms that employ these processes have done so for long enough to reliably measure whether or not the method is successful in bringing in attorneys who will eventually move up to leadership roles in their firms.

And there are many variables beyond recruiting that can complicate whether or not a lawyer is successful, such as a firm's capacity to properly train and retain these lawyers.

Regardless, many firms that employ AI see it as simply another input in conjunction with the traditional process of Big Law hiring.

Shearman & Sterling only considers the results of a candidate's test at the end, when the hiring committee is considering who will receive an offer.

Results aren't shared with interviewers in the OCI (on-campus interviewing) or callback process.

Others, like Thompson Hine, find it best to use the results as a talking point during the interviewing process.

"We want to have people do interviewing based not on their preconceived notions of behaviours because confirmation bias can seep in," said John Nathanson, head of Shearman & Sterling's hiring committee.

"We don't want [the interviewer] to say, 'this person scored low on this.' We want it to be just one part of the file."

And in the end, the predominant method of interviewing - focusing on class and school rank and summer work experience - is so laden with bias, critics say, that introducing this sort of scientific rigor can only help.

"Rank of their law school, GPA - not only are those not predictive of long-term performance, they are biased in many ways," Spencer said.

"You're never going to get 100% solution, but what it is doing is driving substantially higher success rates and substantially less bias."

Report (B): How Racist Robots Are Being Used in Recruitment

Some systems have been shown to associate white names with being more qualified and to weed out applicants who went to women’s colleges

Since graduating from a US university four years ago, Kevin Carballo has lost count of the number of times he has applied for a job only to receive a swift, automated rejection email - sometimes just hours after applying.

Like many job seekers around the world, Mr Carballo’s applications are increasingly being screened by algorithms built to automatically flag attractive applicants to hiring managers.
许多求职者一样世界各地,卡巴略先生申请越来越多算法筛选,这些算法旨在自动标记吸引人的申请者招聘经理。

“There’s no way to apply for a job these days without being analysed by some sort of automated system,” said Mr Carballo, 27, who is Latino and the first member of his family to go to university.

“It feels like shooting in the dark while being blindfolded - there’s just no way for me to tell my full story when a machine is assessing me,” Mr Carballo, who hoped to get work experience at a law firm before applying to law school, told the Thomson Reuters Foundation by phone.

From Artificial Intelligence (AI) programmes that assess an applicant’s facial expressions during a video interview, to resume screening platforms predicting job performance, the AI recruitment industry is valued at more than $500 million (£350 million).

“They are proliferating, they are fast, they are relatively cheap - they are everywhere,” said Alex Enger, a fellow at the Brookings Institution, who studies AI in hiring.

“But at this point there’s very little incentive to build these tools in a way that’s not biased,” he added, saying the cost and time involved in thoroughly testing a system for bias was likely to be prohibitive without regulations requiring it.

For Mr Carballo, racial bias is his top-most concern.

“I worry these algorithms aren’t designed by people like me, and they aren’t designed to pick people like me,” he said, adding that he has undergone a plethora of different AI assessments - from video analytics to custom logic games.

The risk of discrimination is also a central issue for lawmakers around the world as they weigh how to regulate the use of AI technology, particularly in the labour market.
歧视风险也是全球立法者面临的一个核心问题因为他们权衡如何规范AI技术的使用特别是在劳动市场。

While the EU is set to impose rules on the use of AI in hiring, US lawmakers are considering federal laws to address algorithmic bias.

Last year, legislators in New York City proposed a law specifically to regulate AI in hiring.

“We’re approaching an inflection point,” Mr Enger said.

According to the most recent survey by human resource (HR) industry group Mercer, more than 55 per cent of HR managers in the United States use predictive algorithms to help them make hiring choices.

AI is being introduced at every stage of the hiring pipeline, from the job adverts that potential applicants see to the analysis and assessment of their applications and resumes.

The Covid-19 pandemic has sped up the adoption of such tools.

HireVue, an AI hiring firm that builds tools to analyse and score the answers job applicants give in video interviews, reported a 46 per cent surge in usage this year compared to last.

The rise in AI could represent a real opportunity to root out prejudice in the hiring process, said Manish Raghavan, a computer scientist at Cornell University who studies bias in hiring algorithms.

“No one is going to tell you that traditional hiring was equitable,” he said. “And with AI systems we can test them in ways we could never test or audit people’s own biases.”

Subjecting all candidates to the same interview, judged by the same algorithm, eliminates the subjectivity and bias of people in hiring, said Kevin Parker, chief executive of HireVue.

“We can measure how men and women score, and compare how people of colour score against white candidates,” he said.

“We really try to fine-tune the algorithm to eliminate anything that can cause adverse impact, and come to very close parity.”
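The parity tuning Parker describes starts from a simple measurement: compare score distributions across demographic groups and quantify the gaps to be closed. A hedged sketch of that measurement step only; the group names and scores below are invented for illustration:

```python
from statistics import mean

def score_gaps(scores_by_group: dict[str, list[float]]) -> dict[str, float]:
    """Mean assessment score per group, reported as each group's
    shortfall from the highest-scoring group (0.0 = parity)."""
    means = {group: mean(scores) for group, scores in scores_by_group.items()}
    top = max(means.values())
    return {group: round(top - m, 3) for group, m in means.items()}

gaps = score_gaps({
    "women": [72, 68, 75, 70],  # mean 71.25
    "men":   [74, 71, 69, 73],  # mean 71.75
})
```

A real audit would use far larger samples and statistical tests rather than raw mean differences, but the same comparison underlies "coming to very close parity."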

But the problem, Mr Raghavan said, is that when you build a machine learning algorithm, bias can creep into it in many ways that are difficult to detect.

Mr Enger echoed that view.

“Natural language processing systems have been shown to associate white names as being more qualified. Resume screening systems have been shown to weed out all applicants who went to a women’s college,” he said.

“It’s a minefield,” he added.

For job seekers like Mr Carballo - who belong to ethnic minorities and have disadvantaged backgrounds - automated tools can easily reinforce patterns of discrimination, Mr Raghavan said.
对于卡巴洛先生这样的求职者-他们属于少数民族并且不利的背景-自动化工具可以轻易强化歧视的模式,拉哈万先生说。

In 2017, Amazon stopped using an AI resume screener after discovering it penalised resumes that included the word “women”, automatically downgrading graduates of all-women’s colleges.

Because applicants often have no way of understanding how they were scored, they are left wondering if bias crept in, Mr Carballo said.

“I’m a first-generation college student, I’m Latino, and I didn’t go to a top university - and every time I get a rejection, I wonder if the system was designed to weed someone like me out.”

Industry is eager to be perceived as fighting bias, Mr Raghavan said, citing his own research showing that 12 of the 15 largest firms have announced some efforts to tackle discrimination.

But Mr Enger said there was currently little incentive for companies to invest significant resources in detecting and rooting out bias, as regulators are not yet cracking down.

That could start to change, however, as policy makers begin to take a look at the industry.
然而,可能开始改变,因为政策制定者开始关注这个行业。

Regulatory proposals being considered by the European parliament would designate AI used in hiring as “high-risk”, meaning any companies selling such systems would have to be included in a public database.

It would also impose requirements on firms selling such tools in the EU, such as ensuring datasets are “relevant, representative, free of errors and complete”, according to Daniel Leufer, an analyst at digital rights group Access Now.

Mr Leufer said the draft regulations do not go far enough, calling for a blanket ban on certain AI tools in hiring, including any that use biometric information such as facial movements or voice tone.

“The length of my nose; how I speak, the way I move my mouth; we should not allow people to make inferences about someone’s job performance from these kinds of inputs,” he said.

In New York City, the city council is considering a law that would regulate the AI hiring industry, and compel companies to do their own audits for bias, but critics fear it will not be sufficient to rein in discrimination.

“One flawed algorithm can impact hundreds of millions of people,” said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project (STOP), who wants a freeze on AI in hiring pending further bias investigations.

STOP and 11 other digital and civil rights groups sent a letter to New York City Council late last year, asking for stronger protections, including allowing applicants who were discriminated against to file lawsuits.

“We need to press pause until we are able to come up with effective regulatory structures to block AI bias and discrimination,” Mr Cahn said.

In April, after working a string of short-term temporary jobs over the past year, Mr Carballo finally got a full-time job at a law firm.

The hiring manager interviewed him without the use of an AI screener.

“I think that made a difference - I wasn’t just a guy from a rough neighbourhood, with a Spanish last name,” he said.

“I was able to make an impression.”

Report (C): The Unseen Assistant

Artificial intelligence is already doing a lot for us behind the scenes, and a surge of new and better applications is on the way.

For all the fear-mongering around artificial intelligence (AI) taking our jobs and ruling our lives, it has taken 70 years for the technology to get to the stage where it can perform basic human functions at scale and speed.

AI can now beat professional chess players, answer customer queries, detect fraud, diagnose diseases, and guide stock market investments.

In fact, a lot of our interactions today are already being shaped by mainstream AI without our even knowing it.

And while the world was in lockdown, it did a lot of the things socially isolated humans otherwise couldn't: Processing mortgage holidays and small-business loan applications;