writing.exchange is one of the many independent Mastodon servers you can use to participate in the fediverse.
A small, intentional community for poets, authors, and every kind of writer.

Server stats: 321 active users

#aidiscrimination

Miguel Afonso Caetano<p>"We, the undersigned researchers, affirm the scientific consensus that artificial intelligence (AI) can exacerbate bias and discrimination in society, and that governments need to enact appropriate guardrails and governance in order to identify and mitigate these harms. [1]</p><p>Over the past decade, thousands of scientific studies have shown how biased AI systems can violate civil and human rights, even if their users and creators are well-intentioned. [2] When AI systems perpetuate discrimination, their errors make our societies less just and fair. Researchers have observed this same pattern across many fields, including computer science, the social sciences, law, and the humanities. Yet while scientists agree on the common problem of bias in AI, the solutions to this problem are an area of ongoing research, innovation, and policy.</p><p>These facts have been a basis for bipartisan and global policymaking for nearly a decade. [3] We urge policymakers to continue to develop public policy that is rooted in and builds on this scientific consensus, rather than discarding the bipartisan and global progress made thus far."</p><p><a href="https://www.aibiasconsensus.org/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="">aibiasconsensus.org/</span><span class="invisible"></span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/AIBias" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIBias</span></a> <a href="https://tldr.nettime.org/tags/AIDiscrimination" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIDiscrimination</span></a> <a href="https://tldr.nettime.org/tags/Algorithms" class="mention hashtag" rel="nofollow noopener noreferrer" 
target="_blank">#<span>Algorithms</span></a> <a href="https://tldr.nettime.org/tags/ResponsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ResponsibleAI</span></a></p>
Miguel Afonso Caetano<p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/HR" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HR</span></a> <a href="https://tldr.nettime.org/tags/USA" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>USA</span></a> <a href="https://tldr.nettime.org/tags/CivilRights" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CivilRights</span></a> <a href="https://tldr.nettime.org/tags/AIDiscrimination" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIDiscrimination</span></a>: "The American Civil Liberties Union alleged in a complaint to regulators that a large consulting firm is selling AI-powered hiring tools that discriminate against job candidates on the basis of disability and race, despite marketing these services to businesses as “bias free.”</p><p>Aon Consulting, Inc., a firm that works with Fortune 500 companies and sells a mix of applicant screening software, has made false or misleading claims that its tools are “fair,” free of bias and can “increase diversity,” the ACLU alleged in a complaint to the US Federal Trade Commission on Wednesday, a copy of which was reviewed by Bloomberg.</p><p>In its complaint, the ACLU said Aon’s algorithmically driven personality test, ADEPT-15, relies on questions that adversely impact autistic and neurodivergent people, as well as people with mental health disabilities. 
Aon also offers an AI-infused video interviewing system and a gamified cognitive assessment service that are likely to discriminate based on race and disability, according to the complaint.</p><p>The ACLU is calling on the FTC to open an investigation into Aon’s practices, issue an injunction and provide other necessary relief to affected parties."</p><p><a href="https://www.bloomberg.com/news/articles/2024-05-30/aclu-says-in-ftc-complaint-that-aon-s-ai-tools-discriminatory?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTcxNzA4NjA3NiwiZXhwIjoxNzE3NjkwODc2LCJhcnRpY2xlSWQiOiJTRUIzVFREV0xVNjgwMCIsImJjb25uZWN0SWQiOiI2NDU1MEM3NkRFMkU0QkM1OEI0OTI5QjBDQkIzRDlCRCJ9.AWyt7jfDrfGmHZJdUJk_kw5Peo4apyNWckugYuh1Xng" rel="nofollow noopener noreferrer" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">bloomberg.com/news/articles/20</span><span class="invisible">24-05-30/aclu-says-in-ftc-complaint-that-aon-s-ai-tools-discriminatory?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTcxNzA4NjA3NiwiZXhwIjoxNzE3NjkwODc2LCJhcnRpY2xlSWQiOiJTRUIzVFREV0xVNjgwMCIsImJjb25uZWN0SWQiOiI2NDU1MEM3NkRFMkU0QkM1OEI0OTI5QjBDQkIzRDlCRCJ9.AWyt7jfDrfGmHZJdUJk_kw5Peo4apyNWckugYuh1Xng</span></a></p>
Miguel Afonso Caetano<p><a href="https://tldr.nettime.org/tags/UK" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>UK</span></a> <a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/AIBias" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIBias</span></a> <a href="https://tldr.nettime.org/tags/Algorithms" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Algorithms</span></a> <a href="https://tldr.nettime.org/tags/Fraud" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Fraud</span></a> <a href="https://tldr.nettime.org/tags/PublicSector" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>PublicSector</span></a> <a href="https://tldr.nettime.org/tags/AIDiscrimination" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIDiscrimination</span></a>: "The DWP has been using AI to help detect benefits fraud since 2021. The algorithm detects cases that are worthy of further investigation by a human and passes them on for review.</p><p>In response to a freedom of information request by the Guardian, the DWP said it could not reveal details of how the algorithm works in case it helps people game the system.</p><p>The department said the algorithm does not take nationality into account. But because these algorithms are self-learning, no one can know exactly how they do balance the data they receive.</p><p>The DWP said in its latest annual accounts that it monitored the system for signs of bias, but was limited in its capacity to do so where it had insufficient user data. 
The public spending watchdog has urged it to publish summaries of any internal equality assessments."</p><p><a href="https://www.theguardian.com/technology/2023/oct/23/uk-risks-scandal-over-bias-in-ai-tools-in-use-across-public-sector?CMP=share_btn_tw" rel="nofollow noopener noreferrer" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">theguardian.com/technology/202</span><span class="invisible">3/oct/23/uk-risks-scandal-over-bias-in-ai-tools-in-use-across-public-sector?CMP=share_btn_tw</span></a></p>
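Both the ACLU complaint and the DWP audit turn on the same question: does an automated screen select people from one group at a markedly lower rate than another? A common heuristic for flagging this (not described in any of the linked articles, and shown here only as an illustrative sketch with made-up numbers) is the "four-fifths rule" used in US employment-selection guidance: if a protected group's selection rate falls below 80% of the highest group's rate, the system warrants closer review.

```python
# Hedged sketch of a four-fifths (80%) adverse-impact check.
# All counts below are illustrative, not taken from the articles.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who pass the automated screen."""
    return selected / total

def adverse_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Protected group's selection rate relative to the reference group's."""
    return protected_rate / reference_rate

# Hypothetical outcomes of an automated screen:
reference_rate = selection_rate(60, 100)   # reference group passes at 0.60
protected_rate = selection_rate(40, 100)   # protected group passes at 0.40

ratio = adverse_impact_ratio(protected_rate, reference_rate)
needs_review = ratio < 0.8   # four-fifths rule: below 0.8 flags possible adverse impact

print(f"impact ratio = {ratio:.2f}, flag for review: {needs_review}")
```

A check like this only surfaces disparate outcomes; as the DWP case illustrates, it says nothing about *why* a self-learning system weights its inputs the way it does, which is why auditors also ask for equality assessments and sufficient per-group data.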