

Structure of the U.S. “industrial-censorship complex”


In January 2017, Obama's outgoing Homeland Security Secretary Jeh Johnson designated election infrastructure as critical infrastructure, making its protection part of his agency's mandate.

And shortly thereafter:

The Department of Homeland Security created a Countering Foreign Influence Task Force to focus on “disinformation about election infrastructure.”

The State Department's Global Engagement Center expanded its interagency mandate to counter foreign influence operations.

The FBI created a Foreign Influence Task Force to “identify and counter malicious foreign influence operations directed against the United States,” with a particular focus on voting and elections.

These were key components of what later became known as the censorship industrial complex.

In 2018, the U.S. Senate Intelligence Committee requested a “Study on Russian Interference in Social Media,” a study that later served as the rationale for pressuring social media companies to take a harder line on content moderation.

The committee also commissioned Graphika, a social media analytics firm, to co-author a report on Russian interference in social media. Notably, Graphika cites DARPA and the Pentagon's Minerva Initiative, which funds “basic social science research,” as key partners. And Graphika's report “on Russian interference in social media” became the rationale for creating the Stanford Internet Observatory-led “Election Integrity Partnership” - a key element of the government's censorship apparatus during and after the 2020 election.

The Atlantic Council's Digital Forensic Research Lab was another member of the Stanford-led quartet. Partially funded by the State Department - including through the Global Engagement Center - and the Department of Energy, the think tank counts former CIA officials and defense secretaries among its directors. The lab's senior director is Graham Brookie, a former top aide to President Obama on cybersecurity, counterterrorism, intelligence, and homeland security.

The third of the four organizations to join the Election Integrity Partnership was the University of Washington's Center for an Informed Public, founded in 2019. Stanford alumna and professor Kate Starbird co-founded the Center; the National Science Foundation and the Office of Naval Research have funded her social media research. The Stanford Internet Observatory itself is a program of Stanford's Cyber Policy Center, whose leadership includes former Obama National Security Council staffer and former U.S. Ambassador to Russia Michael McFaul, along with other prominent figures with security backgrounds or affiliations.

In the run-up to the 2020 election, the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA), which had taken on the task of protecting election infrastructure, expanded its scope to include combating disinformation perceived as a threat to election security. This eventually came to encompass virtually any political speech by Americans - including speculation and even satire - to the extent that it questioned or undermined state-approved narratives about the unprecedented mass mail-in voting.


#USA #US #american #censorship #CIA #FBI #Pentagon #DARPA #Stanford #CISA #deepstate


"In the context of unprecedented U.S. Department of Defense (DoD) budgets, this paper examines the recent history of DoD funding for academic research in algorithmically based warfighting. We draw from a corpus of DoD grant solicitations from 2007 to 2023, focusing on those addressed to researchers in the field of artificial intelligence (AI). Considering the implications of DoD funding for academic research, the paper proceeds through three analytic sections. In the first, we offer a critical examination of the distinction between basic and applied research, showing how funding calls framed as basic research nonetheless enlist researchers in a war fighting agenda. In the second, we offer a diachronic analysis of the corpus, showing how a 'one small problem' caveat, in which affirmation of progress in military technologies is qualified by acknowledgement of outstanding problems, becomes justification for additional investments in research. We close with an analysis of DoD aspirations based on a subset of Defense Advanced Research Projects Agency (DARPA) grant solicitations for the use of AI in battlefield applications. Taken together, we argue that grant solicitations work as a vehicle for the mutual enlistment of DoD funding agencies and the academic AI research community in setting research agendas. The trope of basic research in this context offers shelter from significant moral questions that military applications of one's research would raise, by obscuring the connections that implicate researchers in U.S. militarism."

https://arxiv.org/abs/2411.17840

#AI #DoD #USA #AIWarfare #MilitaryAI #DARPA