BMH5102 Talent Assessment & Selection
Group Study & Presentation for Lesson 4
Artificial Intelligence and Applicant Screening
Please read the following writeups about the use of artificial intelligence in applicant screening and then answer the questions that follow from the perspective of talent assessment and selection.
Report (A): How Big Law Firms Are Using AI to Cut Down on Hiring Biases
More and more law firms are turning to behavioural psychology and artificial intelligence in their hiring and recruiting processes, including Shearman & Sterling; Cadwalader, Wickersham & Taft; and Willkie Farr & Gallagher.
The practice is meant to bring some empirical science into the hiring process and to bring on more gender- and racially diverse attorneys, but it is not without pitfalls, which can include reinforcing the very biases it is meant to reduce.
Companies such as Suited, Thine and Parker Analytics are using machine learning, artificial intelligence, and behavioural psychology to create better-informed systems for recruiting attorneys.
Firms are turning to these companies to recruit more diverse classes of attorneys who, theoretically, are more likely to become successful lawyers down the road.
But how will law firms determine successful outcomes, and how will these companies avoid bias in the algorithm itself?
Take, for example, Amazon, which developed an AI tool to screen resumes in 2014 that, according to a 2018 Reuters report, actually led to fewer women making it through the screening process.
The company ended up scrapping the tool.
Typically, firms that want to employ AI and behavioural psychology draw their preferred traits - whether that be team orientation or adaptability - from successful attorneys, and sometimes business professionals, in their firms.
But the legal industry as a whole has long struggled with hiring women and people of colour, so simply drawing from their own ranks leads to biases.
"We go into every model that we're building for each organization assuming that there's bias in the data. The algorithms themselves aren't biased , it's the data that they're trained on where there could be bias," said Matthew Spencer, CEO of Suited, an AI company that serves firms such as Cadwalader and Wilson Sonsini Goodrich & Rosati.
“我们进入每一个模型,假设我们为每个组织构建的模型中存在偏见。这些算法本身并不是有偏见的,而是它们所训练的数据可能存在偏见,”马修·斯宾塞说,他是Suited的首席执行官,这是一家为如Cadwalader和WilsonSonsiniGoodrich及Rosati等公司提供服务的人工智能公司。
Companies such as Suited and Thine try to address these potential biases by running law firms' behavioural matrix against a large data set of attorneys to weed out what the companies call "adverse impacts."
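For illustration, an "adverse impact" check of this kind is commonly operationalized with the EEOC's four-fifths rule on selection rates. The minimal sketch below shows the idea; the candidate data and group labels are invented for this handout and are not any vendor's actual code or data.

```python
# Minimal sketch of an adverse-impact check using the EEOC "four-fifths" rule.
# Outcomes below are hypothetical: 1 = advanced past the screen, 0 = screened out.

def selection_rate(outcomes):
    """Fraction of candidates in a group who passed the screen."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratios(group_outcomes):
    """Ratio of each group's selection rate to the highest group's rate.
    Under the four-fifths rule, a ratio below 0.8 flags potential adverse impact."""
    rates = {g: selection_rate(o) for g, o in group_outcomes.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% selected
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 37.5% selected
}

for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```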
Thine CEO and co-founder Sang Lee said Thine's database draws upon thousands of attorneys from all walks of life.
And when Thine is developing a personality matrix for its clients, it interviews the firm's attorneys and also its business professionals such as chief talent officers and internal recruitment staff.
"We're very intentional about the stakeholders we want designing the traits," Lee said.
"Instead of thinking about the people that are already successful and trying to emulate that, we bring people in who are successful but are also profoundly diverse - meaning not just the partners."
Shearman & Sterling, a client of Thine, was well aware of the potential for bias in tech-assisted recruiting, and had Thine leverage its database to back-test any traits that may negatively affect minority candidates or women.
"The characteristics we look for are things like, team-oriented or goal-oriented," said John Nathanson, head of Shearman & Sterling's hiring committee.
“我们寻找的特征包括团队导向或目标导向,”谢尔曼与斯特林的招聘委员会负责人约翰·纳森森说。
"The concepts seem very broad and neutral but we worry, obviously, that we are human beings, and we wanted to back-test it. We had to get and did get confidence and comfort that they won't be inherently biased ."
"这些概念看起来非常广泛且中立,但是我们显然担心我们是人类,而且我们想要进行回测。我们必须获得并且确实获得信心和安慰,以确保它们不会本质上有偏见。"
Still, initial testing and programming have shown successful results in the area of diversity.
"Broadly speaking, these firms are hiring the most diverse classes they've ever hired from the broadest range of law schools that they've ever hired them from," Spencer said.
“广泛来说,这些公司正在招聘他们有史以来最多样化的班级,他们从最广泛的法学院中招聘,这是他们有史以来招聘的,”斯宾塞说。
Firms that have long used this form of interviewing attest to its success.
长期使用这种面试形式的公司证明了它的成功。
According to Andy Colón, chief talent officer at Thompson Hine, the number of Black attorneys at the firm has tripled since it instituted various changes to its interviewing process in 2017.
The number of women and LGBTQ+ attorneys has similarly jumped.
At the same time, legal hiring AI is still in its infancy.
Few, if any, firms that employ these processes have done so for long enough to reliably measure whether or not the method is successful in bringing in attorneys who will eventually move up to leadership roles in their firms.
And there are many variables beyond recruiting that can complicate whether or not a lawyer is successful, such as a firm's capacity to properly train and retain these lawyers.
Regardless, many firms that employ AI see it as simply another input in conjunction with the traditional process of Big Law hiring.
Shearman & Sterling only considers the results of a candidate's test at the end, when the hiring committee is considering who will receive an offer.
Results aren't shared with interviewers in the OCI (on-campus interviewing) or callback process.
Others, like Thompson Hine, find it best to use the results as a talking point during the interviewing process.
"We want to have people do interviewing based not on their preconceived notions of behaviours because confirmation bias can seep in," Nathanson said.
"We don't want [the interviewer] to say, 'this person scored low on this.' We want it to be just one part of the file."
And in the end, the predominant method of interviewing - focusing on class and school rank and summer work experience - is so laden with bias, critics say, that introducing this sort of scientific rigor can only help.
"Rank of their law school, GPA - not only are those not predictive of long-term performance, they are biased in many ways," Spencer said.
"You're never going to get a 100% solution, but what it is doing is driving substantially higher success rates and substantially less bias."
Report (B): How Racist Robots Are Being Used in Recruitment
Some systems have been shown to associate white names with being more qualified and to weed out applicants who went to women’s colleges
Since graduating from a US university four years ago, Kevin Carballo has lost count of the number of times he has applied for a job only to receive a swift, automated rejection email - sometimes just hours after applying.
Like those of many job seekers around the world, Mr Carballo’s applications are increasingly being screened by algorithms built to automatically flag attractive applicants to hiring managers.
“There’s no way to apply for a job these days without being analysed by some sort of automated system,” said Mr Carballo, 27, who is Latino and the first member of his family to go to university.
“It feels like shooting in the dark while being blindfolded - there’s just no way for me to tell my full story when a machine is assessing me,” Mr Carballo, who hoped to get work experience at a law firm before applying to law school, told the Thomson Reuters Foundation by phone.
From Artificial Intelligence (AI) programmes that assess an applicant’s facial expressions during a video interview, to resume screening platforms predicting job performance, the AI recruitment industry is valued at more than $500 million (£350 million).
“They are proliferating, they are fast, they are relatively cheap - they are everywhere,” said Alex Enger, a fellow at the Brookings Institution, who studies AI in hiring.
“But at this point there’s very little incentive to build these tools in a way that’s not biased,” he added, saying the cost and time involved in thoroughly testing a system for bias was likely to be prohibitive without regulations requiring it.
For Mr Carballo, racial bias is his topmost concern.
“I worry these algorithms aren’t designed by people like me, and they aren’t designed to pick people like me,” he said, adding that he has undergone a plethora of different AI assessments - from video analytics to custom logic games.
The risk of discrimination is also a central issue for lawmakers around the world as they weigh how to regulate the use of AI technology, particularly in the labour market.
While the EU is set to impose rules on the use of AI in hiring, US lawmakers are considering federal laws to address algorithmic bias.
Last year, legislators in New York City proposed a law specifically to regulate AI in hiring.
“We’re approaching an inflection point,” Mr Enger said.
According to the most recent survey by human resource (HR) industry group Mercer, more than 55 per cent of HR managers in the United States use predictive algorithms to help them make hiring choices.
AI is being introduced at every stage of the hiring pipeline, from the job adverts that potential applicants see to the analysis and assessment of their applications and resumes.
The Covid-19 pandemic has sped up the adoption of such tools.
HireVue, an AI hiring firm that builds tools to analyse and score the answers job applicants give in video interviews, reported a 46 per cent surge in usage this year compared to last.
The rise in AI could represent a real opportunity to root out prejudice in the hiring process, said Manish Raghavan, a computer scientist at Cornell University who studies bias in hiring algorithms.
“No one is going to tell you that traditional hiring was equitable,” he said. “And with AI systems we can test them in ways we could never test or audit people’s own biases.”
Subjecting all candidates to the same interview, judged by the same algorithm, eliminates the subjectivity and bias of people in hiring, said Kevin Parker, chief executive of HireVue.
“We can measure how men and women score, and compare how people of colour score against white candidates,” he said.
“We really try to fine-tune the algorithm to eliminate anything that can cause adverse impact, and come to very close parity.”
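A hedged sketch of the kind of parity comparison Parker describes - comparing how different groups score under the same algorithm - might look like the following. The scores are invented, and HireVue's actual methodology is not public in this detail.

```python
# Hedged sketch of a group-score parity check: compare mean algorithm scores
# across demographic groups. All scores below are invented for illustration.
from statistics import mean

scores_by_group = {
    "women": [72, 68, 75, 70, 71],
    "men": [71, 74, 69, 73, 70],
}

means = {g: mean(s) for g, s in scores_by_group.items()}
baseline = max(means.values())
for group, avg in means.items():
    # Report each group's mean score relative to the highest-scoring group
    print(f"{group}: mean score {avg:.1f} ({avg / baseline:.1%} of top group)")
```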
But the problem, Mr Raghavan said, is that when you build a machine learning algorithm, bias can creep into it in many ways that are difficult to detect.
Mr Enger echoed that view.
“Natural language processing systems have been shown to associate white names as being more qualified. Resume screening systems have been shown to weed out all applicants who went to a women’s college,” he said.
“It’s a minefield,” he added.
For job seekers like Mr Carballo - who belong to ethnic minorities and have disadvantaged backgrounds - automated tools can easily reinforce patterns of discrimination, Mr Raghavan said.
In 2017, Amazon stopped using an AI resume screener after discovering it penalised resumes that included the word “women”, automatically downgrading graduates of all-women’s colleges.
Because applicants often have no way of understanding how they were scored, they are left wondering if bias crept in, Mr Carballo said.
“I’m a first-generation college student, I’m Latino, and I didn’t go to a top university - and every time I get a rejection, I wonder if the system was designed to weed someone like me out.”
Industry is eager to be perceived as fighting bias, Mr Raghavan said, citing his own research showing that 12 of the 15 largest firms have announced some efforts to tackle discrimination.
But Mr Enger said there was currently little incentive for companies to invest significant resources in detecting and rooting out bias, as regulators are not yet cracking down.
That could start to change, however, as policy makers begin to take a look at the industry.
Regulatory proposals being considered by the European parliament would designate AI used in hiring as “high-risk”, meaning any companies selling such systems would have to be included in a public database.
It would also impose requirements on firms selling such tools in the EU, such as ensuring datasets are “relevant, representative, free of errors and complete”, according to Daniel Leufer, an analyst at digital rights group Access Now.
Mr Leufer said the draft regulations do not go far enough, calling for a blanket ban on certain AI tools in hiring, including any that use biometric information such as facial movements or voice tone.
“The length of my nose, how I speak, the way I move my mouth - we should not allow people to make inferences about someone’s job performance from these kinds of inputs,” he said.
In New York City, the city council is considering a law that would regulate the AI hiring industry, and compel companies to do their own audits for bias, but critics fear it will not be sufficient to rein in discrimination.
“One flawed algorithm can impact hundreds of millions of people,” said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project (STOP), who wants a freeze on AI in hiring pending further bias investigations.
STOP and 11 other digital and civil rights groups sent a letter to New York City Council late last year, asking for stronger protections, including allowing applicants who were discriminated against to file lawsuits.
“We need to press pause until we are able to come up with effective regulatory structures to block AI bias and discrimination,” Mr Cahn said.
In April, after working a string of short-term temporary jobs over the past year, Mr Carballo finally got a full-time job at a law firm.
The hiring manager interviewed him without the use of an AI screener.
“I think that made a difference - I wasn’t just a guy from a rough neighbourhood, with a Spanish last name,” he said.
“I was able to make an impression.”
Report (C): The Unseen Assistant
Artificial intelligence is already doing a lot for us behind the scenes, and a surge of new and better applications is on the way.
For all the fear-mongering around artificial intelligence (AI) taking our jobs and ruling our lives, it has taken 70 years for the technology to get to the stage where it can perform basic human functions at scale and speed.
AI can now beat professional chess players, answer customer queries, detect fraud, diagnose diseases, and guide stock market investments.
In fact, a lot of our interactions today are already being shaped by mainstream AI without our even knowing it.
And while the world was in lockdown, it did a lot of the things socially isolated humans otherwise couldn't: Processing mortgage holidays and small-business loan applications; tracking personal protective equipment; reducing development time for a Covid-19 vaccine.
Without AI, Covid-19 might have been a lot less bearable.
"I worked with some large and small organizations during the pandemic; and if they didn't have AI, they wouldn't have been able to respond to increases in customer enquiries," says Toby Cappello, vice president, Cloud and Cognitive Software Expert Labs at IBM.
“我在疫情期间与一些大型和小型组织合作;如果他们没有人工智能,他们就无法应对客户咨询的增加,”托比·卡佩洛说,IBM云计算和认知软件专家实验室副总裁。
GM Financial, the financing arm of the automotive giant, saw live-chat requests on its mobile app soar after Covid-19 hit.
通用汽车金融作为汽车巨头的融资部门,在新冠疫情爆发后,其移动应用上的在线聊天请求激增。
An AI assistant that handled 50% to 60% of live requests was able to resolve approximately 90% of the questions without any human intervention.
一个处理 50%到 60%实时请求的人工智能助手能够在没有任何人工干预的情况下解决大约 90%的问题。
"I'm seeing tremendous value delivered by AI," says Cappello.
“我看到人工智能带来了巨大的价值,”卡佩罗说。
"[It is] transformational, eye-opening, and surprising to many organizations."
"[它是]变革性的,开阔眼界的,并且令人惊讶对许多组织而言."
During the pandemic, banks tapped AI to speed up document processing, cutting down mortgage processing from months to hours, according to Adrian Poole, director of financial services, UK and Ireland, Google Cloud.
At DBS in Singapore, chatbots, which typically service more than 80% of information requests in English or Mandarin, helped consumers and corporate customers check to see if they qualified for economic relief measures, explains Jimmy Ng, group chief information officer and head of group technology and operations at DBS.
In Russia, Tinkoff Bank estimates that AI chatbots and voice robots in its call centres save the bank approximately 250 million rubles ($3.3 million) a month.
And Konstantin Markelov, vice president of business technologies at Tinkoff, says that by reinforcing its antifraud systems with machine-learning models, it has cut payment fraud in half.
Beyond chatbots, BBVA in Spain is using AI to more efficiently enhance cybersecurity and anti-money-laundering systems, to risk-score small to midsize enterprises using transactional data, and to analyse customer interactions and communications via multiple channels so they can be dealt with more quickly and effectively.
Beena Ammanath, executive director at the Deloitte AI Institute, says AI is having the biggest impact in data-intensive industries like financial services and pharmaceuticals.
A 2020 State of AI in the Enterprise report from Deloitte points to even more novel applications of AI: "from creating the rules for new sports to composing music to finding missing children."
Startups on CB Insights' 2021 AI 100 list use AI in everything from autonomous vehicles and beehives to waste recycling, elder care, dental imaging, insurance pricing, mineral exploration, and climate risk mitigation.
These applications are a far cry from the futuristic consumer-facing inventions - flying cars and robot maids - many people expected from AI.
But our expectations of the technology's capabilities can race ahead of reality.
"I have a good understanding of what AI can and can't do," says Stephen Ritter, chief technology officer at San Diego- based digital identity-verification company Mitek Systems, who has worked in machine learning for more than 30 years.
“我对人工智能能做什么和不能做什么有很好的理解,”位于圣地亚哥的数字身份验证公司 Mitek Systems 的首席技术官斯蒂芬·里特说,他在机器学习领域工作了超过 30 年。
"The general public thinks AI is the Jetsons, robots, and flying cars.
公众认为人工智能是《杰森一家》、机器人和飞行汽车。
That's probably not going to happen for decades and decades."
那可能不会发生几十年和几十年。
What exists today is mainstream, task-oriented AI, he explains, as distinct from "artificial general intelligence," which refers to a time somewhere in the future - 2050 by some accounts - when machines could become "super intelligent" and perform any task a human can.
Market intelligence and advisory firm IDC estimates that spending on AI technologies will skyrocket to $110 billion by 2024 - more than doubling the estimated $50.1 billion spent in 2020.
But not everyone has boarded the AI train yet.
A late-2020 survey of 167 finance organizations in North America, Europe, the Middle East, Africa, and Asia by Gartner saw AI trumped by cloud enterprise resource planning (ERP) systems and advanced data analytics as CFOs' top technology priorities.
Just 13% of CFOs, according to Gartner's survey, plan to invest in AI in the next three years, compared to 64% who will plough investment into cloud ERP systems.
So why are CFOs less bullish than other parts of the organization when it comes to AI?
The biggest hindrance for them, according to Steve Adams, an analyst with Gartner, is the difficulty of predicting, forecasting, and measuring AI's return on investment.
"CFOs tend to think about things through the lens of dollars and cents," he explains.
“首席财务官倾向于从美元和分的角度考虑问题,”他解释说。
"AI technology is relatively new and there are so many potential applications."
人工智能技术相对较新,潜在应用非常多。
So far, the industries that have found the most use cases for AI are financial services, logistics, and transportation; but that's not to say CFOs aren't interested, Adams says, adding that they have been actively and thoughtfully asking questions about AI technologies and applications in corporate finance.
Adams doesn't believe AI will go the way of blockchain, which only 3% of CFOs voted for as a top technology priority in Gartner's survey.
"Blockchain was going to change the world," he says, but that turned out not to be the case.
"Whether AI meets, exceeds, or outperforms depends on our expectations," he notes.
“是否人工智能满足、超越、或表现优于取决于我们的期望,”他指出。
"But if AI doesn't have access to vast amounts of data, it will be difficult for it to provide truly revolutionary applications."
“但是如果人工智能没有访问大量数据,它将很难以提供真正革命性的应用。
Data is the grease for AI, helping it drive richer and seemingly more-accurate interactions between organizations and their customers.
数据是人工智能的润滑剂,帮助它推动组织与客户之间更丰富且看似更准确的互动。
For example, one of BBVA's strategic priorities is to use AI to offer customers more-personalized banking experiences based on their unique financial circumstances.
"We've developed forecasting models in order to anticipate their financial situation several weeks in advance," says Alvaro Martin Enriquez, head of data strategy at BBVA.
"Through these models we foresee undesired situations, like insufficient funds in an account to face a direct debit, and we bring this information to their attention together with actionable solutions."
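As an illustration of the kind of early-warning logic described here, the sketch below projects a balance forward with a naive trend forecast and flags upcoming direct debits it would not cover. The forecasting method and all figures are assumptions for this handout, not BBVA's actual models.

```python
# Hedged sketch of a balance early-warning check: project a customer's balance
# forward and flag direct debits the projection would not cover.

def forecast_balance(history, weeks_ahead):
    """Naive linear projection from the average weekly change in balance."""
    weekly_change = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + weekly_change * weeks_ahead

balance_history = [1200.0, 950.0, 820.0, 600.0]   # last four weeks (hypothetical)
upcoming_debits = [("utility bill", 2, 180.0), ("insurance", 3, 320.0)]

for name, weeks_away, amount in upcoming_debits:
    projected = forecast_balance(balance_history, weeks_away)
    if projected < amount:
        print(f"Alert: projected balance {projected:.0f} won't cover {name} ({amount:.0f})")
```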
Another AI tool developed by the bank even allows companies to learn the estimated amount of greenhouse gas emissions related to their daily activities.
The use of customer data - or any data for that matter - by AI algorithms can raise a host of regulatory, ethical, and moral questions.
How is the data being used, how accurate is it and how transparent is that process to the end consumer?
"For us to get more comfortable with AI. we need to have more transparency," says Lisa Palmer, chief technical adviser at Splunk, a data software company that investigates, monitors, analyses, and acts on data.
“为了让我们对人工智能更加舒适,我们需要更多的透明度,”数据软件公司 Splunk 的首席技术顾问 Lisa Palmer 说,该公司负责调查、监控、分析和处理数据。
"There may be situations where people have a discomfort level caused by not knowing what they're interacting with and how decisions are being made.
可能会出现人们因不知道自己正在与什么互动以及决策是如何做出的而感到不适的情况。
This is what we mean by explainable AI: Making the "black box' a "glass box”.
这是我们所指的可解释人工智能:Makingthe"黑箱'变成"透明箱。
I don't think we’ll get past the social angst around A1 until we have this explainability."
不要认为我们会克服关于 A1 的社交焦虑,直到我们有了这个可解释性。
Machine learning, deep learning, and neural networks define conventional AI approaches, but their Achilles' heel, says Yonatan Hagos, chief product officer at AI software-engineering company Beyond Limits, is that they cannot explain how they arrive at an answer.
Hagos says cognitive AI solutions like the ones Beyond Limits uses take large data sets, then apply a layer of human knowledge and business logic to provide more-accurate recommendations.
"Credit and loan candidate identification is a great example of this, where you have large quantities of data but also need to apply a certain layer of domain expertise," he explains.
Explainable AI is necessary, says Hagos, in high-value, high-risk industries like energy, health care, and finance, as it provides users with transparent and interactive audit trails explaining recommended operational remedial actions.
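A minimal sketch of what layering business logic over a model score might look like in the loan-screening example Hagos gives, including a simple audit trail of the rules that fired. All thresholds, applicant fields, and rules here are invented for illustration; this is not Beyond Limits' system.

```python
# Hedged sketch of layering explicit domain rules over a raw ML score for loan
# candidate identification, recording each rule that fired so the outcome can
# be explained and audited. Thresholds and fields are illustrative assumptions.

def screen_applicant(ml_score, applicant):
    reasons = []
    if applicant["debt_to_income"] > 0.45:
        reasons.append("debt-to-income above 45% policy limit")
    if applicant["months_employed"] < 6:
        reasons.append("less than 6 months in current employment")
    if ml_score < 0.55:
        reasons.append(f"model score {ml_score:.2f} below 0.55 cut-off")
    return ("decline", reasons) if reasons else ("refer to underwriter", reasons)

decision, audit_trail = screen_applicant(
    0.62, {"debt_to_income": 0.50, "months_employed": 24}
)
print(decision, audit_trail)  # the audit trail is the "explainable" part
```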
An October 2019 report by the Bank of England and the UK's Financial Conduct Authority regarding machine learning in UK financial services highlights potential risks around explainability, "meaning that the inner working of a model cannot always be easily understood and summarized," and associated data-quality issues (including biased data) that the report's authors note could negatively impact "consumers' ability to use products and services, or even engage with firms."
At Tinkoff Bank in Russia, Markelov says it does not use an AI algorithm as the final decision maker in credit scoring, but incorporates a neural network (AI)-derived score.
A separate model, he says, allows the bank to smooth over any outliers in AI scoring.
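One simple way such smoothing could work - an assumption for illustration, not Tinkoff's disclosed method - is to cap how far the neural-network score can pull a final score away from a traditional scorecard score:

```python
# Hedged sketch of "smoothing over outliers" in an AI-derived credit score:
# clamp the neural-net score's influence on the final blended score.
# The blending weight and cap are illustrative assumptions.

def blended_score(scorecard, nn_score, weight=0.4, max_shift=50):
    shift = weight * (nn_score - scorecard)
    shift = max(-max_shift, min(max_shift, shift))  # clamp outlier influence
    return scorecard + shift

print(blended_score(scorecard=640, nn_score=890))  # shift capped at +50 -> 690
```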
Ng of DBS Bank in Singapore says its virtual bank-recruiting tool, Jobs Intelligence Maestro (or JIM), which it launched in 2018 for higher-volume roles, helps remove unconscious human bias in the screening process by specifically focusing on skills required for each role.
"That said, we do incorporate several safeguards, including a regular review of algorithms to ensure that bias does not set in," he says.
DBS also uses a data-governance framework called PURE (Purposeful, Unsurprising, Respectful, Explainable), against which it assesses all its AI data-use cases.
"We try to be respectful of privacy and look at all data through these four lenses," says Ng.
Yet, he notes, privacy is subjective.
"In China where there are potentially cameras everywhere, it's probably less of an issue if you use personal data," he explains.
"For each country, it's very different. These questions have to be asked and tailored to each country."
Despite industry efforts to keep AI honest, some high-profile incidents have made AI bias a top regulatory and public concern.
Last July, MIT withdrew a dataset that had been widely used to train machine learning models to identify objects and people in still images, because it used derogatory language to refer to women and people from minority backgrounds.
In 2018, Amazon stopped using a recruitment tool that screened job applicants after it was shown to be biased against women.
Concerns have also been raised about facial-recognition technologies.
Several US cities, including San Francisco and Portland, have banned its use by local government; only Portland has banned its use by private-sector entities.
Proposed US regulation, the 2019 Algorithmic Accountability Act, could require companies to monitor and repair "discriminatory algorithms."
The European Commission last month announced its proposal of new regulations to ban "AI systems considered a clear threat to the safety, livelihoods, and rights of people."
This would include the use of facial-recognition technologies for indiscriminate surveillance, as well as algorithms used for "social scoring" and recruitment.
If AI is developed by a diverse group of engineers, then it should counteract possible implicit bias, says Hagos.
Deloitte's Ammanath still believes a lot of good can come from AI, as long as it is thoughtful.
"Right now we're having the right conversations around ethics, how you protect humans and what new jobs look like," she says, noting that three years ago there were fewer such discussions.
There are also ways of solving bias in data using synthetic or artificially created data.
"One way of applying synthetic data would be to identify data that is flawed (racist or homophobic) and replace the flawed elements with "clean' data,” says Splunk's Palmer.
“一种应用合成数据的方法是识别有缺陷的数据(种族主义或恐同)并且用“干净的数据”替换这些有缺陷的元素,”Splunk 的Palmer说。
"Doing so would allow for [machine learning] models to learn based upon desired inputs versus flawed inputs. Such an approach would allow for creation of models purposefully designed for desired outcomes. For example, if a credit grantor wanted to create a model designed for racial equity versus racial equality, they could offer better loan rates and improved credit card offers to a targeted group."
“这样做将允许机器学习模型根据期望输入与有缺陷的输入进行学习。这种方法将允许创建专门设计用于期望结果的模型。例如,如果一个信贷授予者想要创建一个旨在实现种族公平而非种族平等的模型,他们可以向一个目标群体提供更好的贷款利率和改进的信用卡优惠。
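A deliberately simplistic sketch of the replace-flawed-with-synthetic idea Palmer outlines appears below. The flaw detector and the synthetic generator are placeholder assumptions; production pipelines would use far more robust methods.

```python
# Hedged sketch of Palmer's idea: flag training records containing flawed
# (e.g., biased) content and replace them with clean, synthetic substitutes.

FLAWED_TERMS = {"slur_a", "slur_b"}   # stand-ins for actual flagged language

def is_flawed(record):
    return any(term in record["text"].lower() for term in FLAWED_TERMS)

def synthetic_replacement(record):
    """Keep the label but substitute neutral, synthetic text (illustrative)."""
    return {"text": "[synthetic neutral example]", "label": record["label"]}

training_data = [
    {"text": "great teamwork on the project", "label": 1},
    {"text": "contains slur_a in a review", "label": 0},
]

cleaned = [synthetic_replacement(r) if is_flawed(r) else r for r in training_data]
print(cleaned)
```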
At the bare minimum, says Mitek's Ritter, a public debate is needed around the ways in which AI is being used.
"What I'd like to see is more clear-cut rules and frameworks to avoid bad outcomes," he says.
"I'd like to see governments come in and provide a framework for how we move forward. I'm excited to see what the next 10 years brings. If we can avoid some of the big mistakes, that will make the technology much better."
"For us to get more comfortable with Al, we need to have more transparency."
“为了让我们对人工智能感到更舒适,我们需要有更多透明度。”
Sources:
Report (A): Dylan Jackson (2021). How Big Law Firms Are Using AI To Cut Down on Hiring Biases. 5 August. New York Law Journal, p.1, col.5, Volume 266; Issue 25. Copyright 2021 LexisNexis. All Rights Reserved.
Report (B): Avi Asher-Schapiro (2021). How Racist Robots Are Being Used in Recruitment. 7 June, 20:22. Independent Online. ©2021 Independent Digital News and Media Ltd. All Rights Reserved.
Report (C): Anita Hawser (2021). The Unseen Assistant. Global Finance, 14-16, 18, Volume 35; Issue 5. ©2021 Global Finance. All Rights Reserved.
Guiding Questions:
What are the advantages & disadvantages of using machine learning (ML)/artificial intelligence (AI) for recruiting and screening legal-practice job candidates for further in-depth assessment?
What should be done to minimize, overcome, address, avoid, and/or mitigate the discriminatory biases that may arise from using ML/AI to recruit & screen legal-practice job applicants?
What are the minimum applicant qualifications/work-related personal characteristics you would set for recruiting and shortlisting legal-practice applicants for further in-depth assessment?
What job-related applicant information, if any, can you ask for and/or glean from the following applicant assessment methods to help you shortlist legal-practice applicants for further in-depth assessment?
Application form
Psychometric testing
Recruitment/Preliminary Interview
Self-presentation video
Referees’ letters
As a recruiter, what would you do to ensure that potential applicants of legal-practice positions in your organization receive a proper realistic job preview (i.e., a balanced view of both the positive and negative aspects of the job & organization in question) before filing their job applications?
Please be specific and justify your answers.