Posts tagged with sts
"We articulate a vision of artificial intelligence (AI) as normal technology. To view AI as normal is not to understate its impact—even transformative, general-purpose technologies such as electricity and the internet are “normal” in our conception. But it is in contrast to both utopian and dystopian visions of the future of AI which have a common tendency to treat it akin to a separate species, a highly autonomous, potentially superintelligent entity.
The statement “AI is normal technology” is three things: a description of current AI, a prediction about the foreseeable future of AI, and a prescription about how we should treat it. We view AI as a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions or technical breakthroughs. We do not think that viewing AI as a humanlike intelligence is currently accurate or useful for understanding its societal impacts, nor is it likely to be in our vision of the future.
The normal technology frame is about the relationship between technology and society. It rejects technological determinism, especially the notion of AI itself as an agent in determining its future. It is guided by lessons from past technological revolutions, such as the slow and uncertain nature of technology adoption and diffusion. It also emphasizes continuity between the past and the future trajectory of AI in terms of societal impact and the role of institutions in shaping this trajectory."
https://knightcolumbia.org/content/ai-as-normal-technology
#AI #STS #AIPolicy #Superintelligence
Beyond Fairness in Computer Vision: A Holistic Approach to Mitigating Harms and Fostering Community-Rooted Computer Vision Research
Timnit Gebru and Remi Denton
"ABSTRACT: The field of computer vision is now a multi-billion dollar enterprise, with its use in surveillance applications driving
this large market share. In the last six years, computer vision researchers have started to discuss the risks and harms of some of these systems, mostly using the lens of fairness introduced in the machine learning literature to perform this analysis. While this lens is useful to uncover and mitigate a narrow segment of the harms that can be enacted through computer vision systems, it is only one of the toolkits that researchers have available to uncover and mitigate the harms of the systems they build.
In this monograph, we discuss a wide range of risks and harms that can be enacted through the development and deployment of computer vision systems. We also discuss some existing technical approaches to mitigating these harms, as well as the shortcomings of these mitigation strategies.
Then, we introduce computer vision researchers to harm mitigation strategies proposed by journalists, human rights activists, individuals harmed by computer vision systems, and researchers in disciplines ranging from sociology to physics. We conclude the monograph by listing principles that researchers can follow to build what we call community rooted computer vision tools in the public interest, and give examples of such research directions. We hope that this monograph can serve as a starting point for researchers exploring the harms of current computer vision systems and attempting to steer the field into community-rooted work."
https://cdn.sanity.io/files/wc2kmxvk/revamp/79776912203edccc44f84d26abed846b9b23cb06.pdf
#ComputerVision #STS #Surveillance