Reducing Algorithmic Bias Through Accountability and Transparency

Despite being a mathematician’s dream word, algorithms — or sets of instructions that humans or, most commonly, computers execute — have cemented themselves as an integral part of our daily lives.

They are working behind the scenes when we search the web, read the news, discover new music or books, apply for health insurance, and search for a date. To put it simply, algorithms are a way to automate routine or information-heavy tasks.

However, some “routine” tasks have serious implications, such as determining credit scores, cultural or technical “fit” for a job, or the perceived level of criminal risk. While these algorithms are largely designed with society’s benefit in mind, they are mathematical or logical models meant to reflect reality, which is often more nuanced than any model can capture.

For instance, some students aren’t eligible for loans because a lending model deems them too risky by virtue of their zip codes, which can result in an endless spiral of education and poverty challenges.

Algorithms can be incredibly helpful for society by improving human services, reducing errors, and identifying potential threats. However, algorithms are built by humans and thus reflect their creators’ imperfections and biases.

To ensure algorithms help society and do not discriminate, disparage, or perpetuate hate, we, as a society, need to be more transparent and accountable in how our algorithms are designed and developed. Considering the importance of algorithms in our daily lives, here are a few examples of biased algorithms and how we can improve algorithm accountability.

How computers learn biases

Much has been written on how humans’ cognitive biases influence everyday decisions. Humans use biases to reduce mental burden, often without cognitive awareness. For instance, we tend to think that the likelihood of an event is proportional to the ease with which we can recall an example of it happening. So if someone decides to continue smoking based on knowing a smoker who lived to be 100 despite significant evidence demonstrating the harms of smoking, that person is using what is called the availability bias.

Humans have trained computers to take over routine tasks for decades. Initially, these were very simple tasks, such as calculating large sets of numbers. As the computer and data science fields have expanded exponentially, computers are being asked to take on more nuanced problems through new tools (e.g., machine learning). Over time, researchers have found that algorithms often replicate and even amplify the prejudices of those who create them.

Since algorithms require humans to define exhaustive, step-by-step instructions, the developers’ inherent perspectives and assumptions can unintentionally build in bias. In addition to bias in development, algorithms can be biased if they are trained on incomplete or unrepresentative training data. Common facial recognition training datasets, for example, are 75% male and 80% white, which leads systems trained on them to exhibit both skin-type and gender biases, with higher error rates and more misclassification.

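One practical way to surface this kind of skew is to report a model’s error rate per demographic group instead of a single overall accuracy number. Below is a minimal, hypothetical sketch in Python; the group labels and sample predictions are illustrative, not drawn from any real benchmark.

```python
from collections import defaultdict

def error_rate_by_group(records):
    """records: (group, true_label, predicted_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if truth != prediction:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy predictions from a hypothetical face-matching model (illustrative only).
sample = [
    ("lighter-skinned men", "match", "match"),
    ("lighter-skinned men", "no match", "no match"),
    ("darker-skinned women", "match", "no match"),
    ("darker-skinned women", "no match", "no match"),
]
print(error_rate_by_group(sample))
# {'lighter-skinned men': 0.0, 'darker-skinned women': 0.5}
# A large gap between groups is a signal to re-examine the training data.
```
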
At the individual level, a biased algorithm can significantly harm a person’s life (e.g., by increasing prison time based on race). When spread across an entire population, these inequalities are magnified and have lasting effects on certain groups. Here are a few examples.

Searching and displaying information

Google, one of the most well-known companies in the world, shapes how millions of people find and interact with information through search algorithms. For many years, Googling “Black girls” would yield sexualized and even pornographic search results. Google’s engineers are largely male and white, and their biases and viewpoints may be unintentionally (or intentionally) reflected in the algorithms they build.

This illustrates the consequences of unquestioningly trusting algorithms and demonstrates how data discrimination is a real problem. By 2016, after drawing widespread attention, Google had modified the algorithm to include more diverse images of Black girls in its image search results.

Recruiting

Many companies use machine learning algorithms to scan resumes and make suggestions to hiring managers. Amazon scrapped an internal machine learning recruiting engine after realizing it favored men’s resumes. To train the system, they had used resumes of current and previous employees over ten years in order to identify patterns; however, this meant that most of the resumes came from men. Had the model not been questioned and reviewed, it would have only exacerbated Amazon’s penchant for male dominance.

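One sanity check before deploying such a screening model is to compare selection rates across groups on held-out data, in the spirit of the “four-fifths rule” used in employment-discrimination analysis. A minimal, hypothetical sketch:

```python
from collections import defaultdict

def selection_rate_by_group(predictions):
    """predictions: (group, recommended_for_interview) tuples."""
    recommended = defaultdict(int)
    totals = defaultdict(int)
    for group, selected in predictions:
        totals[group] += 1
        if selected:
            recommended[group] += 1
    return {group: recommended[group] / totals[group] for group in totals}

# Illustrative output from a hypothetical resume screener.
rates = selection_rate_by_group([
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", True), ("women", False), ("women", False), ("women", False),
])
print(rates)  # {'men': 0.75, 'women': 0.25} -- a gap this large warrants review before launch
```
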
In addition to gender bias, tech companies are known for low levels of diversity and racist hiring practices. Blacks and Latinos are increasingly graduating from college with computer science degrees, but they are still underemployed.

Hiring trends based on these biases exacerbate white privilege and discriminate against people of color.

Healthcare

The U.S. healthcare system uses commercial algorithms to guide health decisions, and algorithms help doctors identify and treat patients with complex health needs. A good example of this is the CHA2DS2-VASc atrial fibrillation risk score calculator, which estimates the risk of stroke for patients with atrial fibrillation and helps guide preventative treatment.

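To make concrete what such a scoring algorithm looks like, here is a simplified, illustrative sketch of a CHA2DS2-VASc-style point calculation. It is a rough rendering of the published scoring rules for explanation only, not a clinical tool.

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 prior_stroke_or_tia, vascular_disease):
    """Simplified CHA2DS2-VASc point total (illustrative only, not for clinical use)."""
    score = 0
    score += 1 if chf else 0                               # congestive heart failure
    score += 1 if hypertension else 0                      # hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)   # age bands
    score += 1 if diabetes else 0                          # diabetes mellitus
    score += 2 if prior_stroke_or_tia else 0               # prior stroke / TIA
    score += 1 if vascular_disease else 0                  # vascular disease
    score += 1 if female else 0                            # sex category
    return score

# Example: a 70-year-old woman with hypertension scores 3 points.
print(cha2ds2_vasc(age=70, female=True, chf=False, hypertension=True,
                   diabetes=False, prior_stroke_or_tia=False,
                   vascular_disease=False))
```
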
However, Science published a study in which researchers found “significant racial bias” in one of these widely used algorithms, resulting in consistently and dramatically underestimating Black patients’ healthcare needs. Practitioners use this algorithm to identify patients for “high-risk care management” programs, which seek to improve the care of patients with complex health needs by providing additional resources, greater attention from trained providers, and more coordinated care.

The algorithm uses healthcare costs as a proxy for health needs, when variables like “active chronic conditions” would be more accurate.

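The core issue is the choice of prediction target: a model trained to predict spending will rank a patient who historically generated lower costs for the same illness as having lower “need.” A toy illustration with hypothetical numbers (not from the study):

```python
# Two hypothetical patients with an identical chronic-condition burden.
patients = [
    {"id": "A", "active_chronic_conditions": 4, "annual_cost_usd": 12_000},
    {"id": "B", "active_chronic_conditions": 4, "annual_cost_usd": 6_000},
]

# Proxy target: rank by past spending, as the studied algorithm effectively did.
by_cost = sorted(patients, key=lambda p: p["annual_cost_usd"], reverse=True)

# Direct target: rank by a measure of actual health need.
by_need = sorted(patients, key=lambda p: p["active_chronic_conditions"], reverse=True)

print([p["id"] for p in by_cost])  # ['A', 'B'] -- B looks lower-need only because less was spent on B
print([p["id"] for p in by_need])  # equal need; any tie-break should come from clinical data, not spending
```
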
Without the algorithm’s bias, the percentage of Black patients receiving extra healthcare services would jump from 17.7% to 46.5%, which would likely improve their health and recovery rates.

Policing

From arrest through bail, trial, and sentencing, algorithmic inequality shows up. Police in Detroit recently arrested Robert Julian-Borchak Williams based on a false facial recognition match; he was detained for 30 hours and interrogated for a crime someone else committed. Ultimately the charges were dropped due to insufficient evidence, but this marks the beginning of an uncertain chapter. Joy Buolamwini, an MIT researcher and founder of the Algorithmic Justice League, noted:

“The threats to civil liberties posed by mass surveillance are too high a price. You cannot erase the experience of 30 hours detained, the memories of children seeing their father arrested, or the stigma of being labeled criminal.”

Algorithms inform decisions around granting or denying bail and handing out sentences. They help assign a reoffense risk score used to determine whether to direct additional police resources toward ‘high-risk’ individuals. Additionally, ‘hot spot policing’ uses machine learning to analyze crime data and determine where to concentrate police patrols at different times of the day and night.

The COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm, used by judges to predict whether defendants should be detained or released on bail pending trial, was found to be biased against African-Americans. Using arrest records, defendant demographics, and other variables, the algorithm assigns a risk score reflecting a defendant’s likelihood of committing a future offense. Compared to whites who were equally likely to re-offend, African-Americans were more likely to be assigned a higher risk score and spent longer periods of time in detention while awaiting trial.

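A standard way auditors check for this kind of disparity is to compare error rates across groups: among defendants who did not go on to re-offend, how often was each group labeled high risk (the false positive rate)? A minimal sketch with made-up records:

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: (group, labeled_high_risk, reoffended) tuples with boolean flags."""
    false_positives = defaultdict(int)   # labeled high risk but did not re-offend
    non_reoffenders = defaultdict(int)   # everyone who did not re-offend
    for group, high_risk, reoffended in records:
        if not reoffended:
            non_reoffenders[group] += 1
            if high_risk:
                false_positives[group] += 1
    return {g: false_positives[g] / n for g, n in non_reoffenders.items() if n}

# Illustrative data only; a real audit would use the full defendant dataset.
audit = [
    ("group_1", True, False), ("group_1", False, False), ("group_1", True, True),
    ("group_2", False, False), ("group_2", False, False), ("group_2", True, True),
]
print(false_positive_rate_by_group(audit))  # {'group_1': 0.5, 'group_2': 0.0}
```
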
It’s not all bad: Initial progress in big tech

Following the controversial and highly publicized death of George Floyd, the significant racial inequalities in the United States are beginning to be broadly acknowledged.

In light of civil unrest, large tech companies are beginning to respond. IBM announced it is stopping all facial recognition work, and Amazon paused selling its facial recognition tool to law enforcement for one year. Microsoft President Brad Smith announced the company would not sell facial recognition to police “until we have a national law in place, grounded in human rights, that will govern this technology.”

Other companies are also taking things into their own hands. Six Los Angeles tech companies share how they are taking action, from fostering and elevating the important dialogue around race in America to implementing more assertive diversity hiring and recruitment practices, providing additional mental health services, and donating to organizations fighting for racial equality.

The Stop Hate for Profit campaign is an excellent example of how the public can pressure tech companies to move away from “neutrality” that is in fact biased. Netflix launched a Black Lives Matter collection, which is a great example of amplifying creative black voices.

Where do we go from here?

In many situations, algorithms make our lives easier. However, algorithms can also create biases that disproportionately affect certain populations.

In order to continually improve how algorithms support society, we need to demand more accountability and transparency. Vox, for example, shared an algorithmic bill of rights to protect people from risks that AI is introducing into their lives.

Citizens have a right to know how and when an algorithm is making a decision that affects them, as well as the factors and data it is using to come to that decision. The Association for Computing Machinery has also developed transparency and accountability principles for algorithms. We can support these organizations in raising awareness and in advocating and lobbying for more transparency and accountability from companies and the government.

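In engineering terms, one lightweight step toward that kind of transparency is to record, alongside every automated decision, the inputs and per-factor contributions that produced it. A hypothetical sketch for a simple linear scoring model (the feature names and weights are made up for illustration):

```python
import json
from datetime import datetime, timezone

# Hypothetical linear scoring model; weights are illustrative only.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}

def score_with_explanation(applicant):
    """Return a score plus a persistable record of how it was reached."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": applicant,
        "contributions": contributions,       # which factors drove the score
        "score": sum(contributions.values()),
    }
    # Persisting this record is what lets a person later ask "why?"
    print(json.dumps(record, indent=2))
    return record

score_with_explanation({"income": 3.2, "debt_ratio": 1.5, "years_employed": 4})
```
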
Finally, tech companies should become more inclusive and diverse so their teams are more representative of the population they serve.

About the Authors

Meghan is a Senior UX Researcher at 15Five, an employee engagement platform focused on creating highly engaged, high-performing organizations by helping people become their best selves. Before joining 15Five, Meghan founded the UX Research team at Factual, a startup focused on building tools to make data more accessible and actionable. Meghan writes UX Research-focused content on Medium, as well as education, mindfulness, and neuroscience books and research briefs with The Center for Educational Improvement.

Jared is the CEO of PwrdBy, a speaker, and a published author. PwrdBy empowers nonprofits to fundraise smarter through artificial intelligence apps such as Amelia and NeonMoves. Before joining PwrdBy, Jared was a Senior Consultant in Deloitte’s Sustainability practice with experience working with Fortune 500 companies to design social and environmental sustainability strategies. He is a Lean Six Sigma Black Belt.

Translated from: https://medium.com/swlh/reducing-algorithmic-bias-through-accountability-and-transparency-b7dc210df678
