https://www.technologyreview.com/2024/08/14/1096534/homeland-security-facial-recognitio...
AI: risks to knowledge economies
AI and economy and education synthesis
AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific Ame...
- Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
- Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
- Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
I unintentionally created a biased AI algorithm 25 years ago - tech companies are still...
- How and why do well-educated, well-intentioned scientists produce biased AI systems? Sociological theories of privilege provide one useful lens.
- Their training data is biased. They are designed by an unrepresentative group. They face the mathematical impossibility of treating all categories equally. They must somehow trade accuracy for fairness. And their biases are hiding behind millions of inscrutable numerical parameters. (A toy sketch after these excerpts reproduces the first of those failure modes.)
- Fairness can still be the victim of competitive pressures in academia and industry. The flawed Bard and Bing chatbots from Google and Microsoft are recent evidence of this grim reality. The commercial necessity of building market share led to the premature release of these systems.
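
On the biased-training-data excerpt above: that failure mode is easy to reproduce. The following is a minimal, hypothetical sketch (synthetic data, an invented two-group setup, and scikit-learn's stock logistic regression; none of it comes from the article) of how a model fitted mostly to one group rejects qualified members of an underrepresented group more often.

    # Hypothetical demo: skewed training data -> unequal error rates.
    # Group names, sizes, and distribution shifts are invented for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def sample(n, shift):
        """One informative score per person; `shift` displaces a group's scores."""
        y = rng.integers(0, 2, n)                 # true label (1 = qualified)
        x = y + shift + rng.normal(0.0, 1.0, n)   # observed score
        return x.reshape(-1, 1), y

    # Group A dominates the training set; group B is smaller and shifted.
    xa, ya = sample(9000, shift=0.0)
    xb, yb = sample(1000, shift=-0.7)
    clf = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

    def miss_rate(x, y_true):
        """Share of truly qualified people the model rejects."""
        pos = y_true == 1
        return float((clf.predict(x)[pos] == 0).mean())

    print("miss rate, group A:", round(miss_rate(xa, ya), 2))  # low
    print("miss rate, group B:", round(miss_rate(xb, yb), 2))  # roughly double

The single learned threshold settles where it serves the majority group, so group B's qualified members are rejected far more often. Forcing the two miss rates to match (for instance, with per-group thresholds) would lower overall accuracy, which is the accuracy-for-fairness trade the excerpt describes.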
Lack of Transparency over Police Forces' Covert Use of Predictive Policing Software Rai...
- Currently, through the use of blanket exemption clauses – and without any clear legislative oversight – the public has no meaningful access to information on the systems that may be used to surveil them. Companies including Palantir, NSO Group, QuaDream, Dark Matter and Gamma Group are all exempt from disclosure under the precedent set by the police, along with another entity, Dataminr.
- Dataminr has helped police in the US monitor and break up Black Lives Matter and Muslim rights activism through social media monitoring. Dataminr software has also been used by the Ministry of Defence, the Foreign, Commonwealth and Development Office, and the Cabinet Office.
- New research shows that, far from being a ‘neutral’ observational tool, Dataminr produces results that reflect its clients’ politics, business goals and ways of operating.
Iran Says Face Recognition Will ID Women Breaking Hijab Laws | WIRED
- After Iranian lawmakers suggested last year that face recognition should be used to police hijab law, the head of an Iranian government agency that enforces morality law said in a September interview that the technology would be used “to identify inappropriate and unusual movements,” including “failure to observe hijab laws.” Individuals could be identified by checking faces against a national identity database to levy fines and make arrests, he said.
- Iran’s government has monitored social media to identify opponents of the regime for years, Grothe says, but if government claims about the use of face recognition are true, it’s the first instance she knows of a government using the technology to enforce gender-related dress law.
- Mahsa Alimardani, who researches freedom of expression in Iran at the University of Oxford, has recently heard reports of women in Iran receiving citations in the mail for hijab law violations despite not having had an interaction with a law enforcement officer. Iran’s government has spent years building a digital surveillance apparatus, Alimardani says. The country’s national identity database, built in 2015, includes biometric data like face scans and is used for national ID cards and to identify people considered dissidents by authorities.
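
For context on how this kind of 1:N identification works in general (nothing below is specific to Iran's system, whose internals are not public): deployed systems typically map each face image to an embedding vector and search the enrolled database for the nearest match. A minimal sketch, with invented names, a made-up threshold, and random vectors standing in for a real face-embedding model:

    # Generic 1:N face-identification sketch. The database, threshold, and
    # random "embeddings" are placeholders; real systems use a learned
    # face-embedding network, not random vectors.
    import numpy as np

    rng = np.random.default_rng(0)
    EMBED_DIM = 512     # typical face-embedding size
    THRESHOLD = 0.6     # illustrative similarity cutoff

    # Stand-in for an enrolled identity database: id -> face embedding.
    database = {f"id_{i:04d}": rng.normal(size=EMBED_DIM) for i in range(10_000)}
    ids = list(database)
    matrix = np.stack([database[i] for i in ids])
    matrix /= np.linalg.norm(matrix, axis=1, keepdims=True)

    def identify(probe):
        """Return the best-matching enrolled ID, or None if below threshold."""
        probe = probe / np.linalg.norm(probe)
        sims = matrix @ probe            # cosine similarity to every entry
        best = int(np.argmax(sims))
        return ids[best] if sims[best] >= THRESHOLD else None

    # With random vectors this almost surely prints None; with a real
    # embedding model it would return the nearest enrolled identity.
    print(identify(rng.normal(size=EMBED_DIM)))

The sketch makes the excerpt's point concrete: once a biometric database exists, checking any camera frame against the entire enrolled population is a single normalized matrix multiply, which is what makes dragnet-style enforcement cheap to scale.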
OpenAI's bot wrote my obituary. It was filled with bizarre lies.
- What I find so creepy about OpenAI’s bots is not that they seem to exhibit creativity; computers have been doing creative tasks such as generating original proofs in Euclidean geometry since the 1950s. It’s that I grew up with the idea of a computer as an automaton bound by its nature to follow its instructions precisely; barring a malfunction, it does exactly what its operator—and its program—tell it to do. On some level, this is still true; the bot is following its program and the instructions of its operator. But the way the program interprets the operator’s instructions is not the way the operator thinks. These programs are optimized not to solve problems, but to convince their operators that they have solved those problems. That was written on the package of the Turing test—it’s a game of imitation, of deception. For the first time, we’re forced to confront the consequences of that deception.
- A computer program that would be sociopathic if it were alive.
- Even when it’s not supposed to, even when it has a way out, even when the truth is known to the computer and it’s easier to spit it out than to fabricate something—the computer still lies.
TSA is adding face recognition at big airports. Here's how to opt out. - The Washington...
- Any time data gets collected somewhere, it could also be stolen — and you only get one face. The TSA says all its databases are encrypted to reduce hacking risk. But in 2019, the Department of Homeland Security disclosed that photos of travelers were taken in a data breach, accessed through the network of one of its subcontractors.
- “What we often see with these biometric programs is they are only optional in the introductory phases — and over time we see them becoming standardized and nationalized and eventually compulsory,” said Cahn. “There is no place more coercive to ask people for their consent than an airport.”
- Those who have the privilege of not having to worry that their face will be misread can zip right through — whereas people who don’t consent to it pay a tax with their time. At that point, how voluntary is it, really?