The US government has tried to sway online conversation behind closed doors and through pressure on commercial sites. Leaked documents exposed a DHS plan to police disinformation.
The Department of Homeland Security is covertly expanding its efforts to stifle speech it deems threatening, according to a report by The Intercept. Years’ worth of internal DHS memos, emails, and documents, some made public and others obtained through leaks and ongoing litigation, reveal the agency’s extensive efforts to influence tech platforms.
Most of the work, much of which is still unknown to the American public, became apparent earlier this year when DHS announced a new “Disinformation Governance Board”: a panel intended to police misinformation (false information spread inadvertently), disinformation (false information spread intentionally), and malinformation (factual information shared, usually out of context, with harmful intent) that supposedly jeopardizes U.S. interests. Although the board was widely mocked, promptly scaled back, and shut down within a few months, new initiatives are ongoing as DHS pivots from its founding mission, the war on terror, toward social media surveillance.
Meeting minutes (read below) and other documents attached to a lawsuit filed by Republican Missouri Attorney General Eric Schmitt, who is also running for the Senate, show that discussions have covered a variety of topics, from the extent of government involvement in online discourse to the logistics of streamlining takedown requests for false or purposefully misleading information.
“Platforms have got to get comfortable with gov’t. It’s really interesting how hesitant they remain,” Microsoft executive Matt Masterson, a former DHS official, texted Jen Easterly, director of DHS’s Cybersecurity and Infrastructure Security Agency, in February.
At a March meeting attended by senior officials from Twitter and JPMorgan Chase, FBI agent Laura Dehmlow cautioned that the threat of disinformation on social media could erode support for the American government. According to notes of the meeting, Dehmlow emphasized that “we need a media infrastructure that is held accountable.”
“We do not coordinate with other entities when making content moderation decisions, and we independently evaluate content in line with the Twitter Rules,” a spokesperson for Twitter wrote in a statement to The Intercept.
There is also an established channel for government officials to report content on Facebook or Instagram and request that it be throttled or suppressed, via a dedicated Facebook portal that requires a government or law enforcement email address to access. The “content request system” at facebook.com/xtakedowns/login is still operational as of this writing.
The Department of Homeland Security’s drive to combat disinformation, born of worries about Russian meddling in the 2016 presidential election, took shape during the 2020 election and in efforts to steer discussions about vaccine policy during the coronavirus pandemic. The Intercept obtained documents from a range of sources, including current officials and publicly available reports, that trace the emergence of these more aggressive DHS tactics.
According to a draft of DHS’s Quadrennial Homeland Security Review, the department’s capstone report laying out its strategy and priorities for the years ahead, the department plans to target “inaccurate information” on a wide range of subjects, including “the origins of the COVID-19 pandemic and the efficacy of COVID-19 vaccines, racial justice, U.S. withdrawal from Afghanistan, and the nature of U.S. support to Ukraine.”
“The challenge is particularly acute in marginalized communities,” the report states, “which are often the targets of false or misleading information, such as false information on voting procedures targeting people of color.”
Considering that House Republicans have pledged to open an investigation should they win a majority in the midterm elections, the inclusion of the U.S. pullout from Afghanistan in 2021 is especially noteworthy. Rep. Mike Johnson, R-La., a member of the Armed Services Committee, stated that seeking answers “will be a top priority” and that “this makes Benghazi look like a much smaller issue.”
The government has never clearly defined disinformation, and the inherently subjective question of what counts as disinformation leaves DHS officials free to make politically motivated judgments about what constitutes hazardous speech.
DHS justifies these goals, which have expanded far beyond its original focus on foreign threats to include domestic disinformation, by arguing that terrorist risks can be “exacerbated by misinformation and disinformation spread online.” The noble purpose of safeguarding Americans from harm, however, has frequently served as cover for political maneuvering. According to former DHS Secretary Tom Ridge, the George W. Bush administration pressured DHS employees in 2004 to raise the national terrorism threat level in order to influence voters ahead of the election. The U.S. government has repeatedly misled the public on a wide range of issues, from the origins of its wars in Vietnam and Iraq to the National Institutes of Health’s role in funding the Wuhan Institute of Virology’s coronavirus research.
Despite this record, the U.S. government continues to try to decide which information on fundamentally political subjects qualifies as false or dangerous. The “Stop WOKE Act,” signed by Republican Gov. Ron DeSantis earlier this year, prohibits private employers from conducting workplace trainings that assert a person is privileged or oppressed because of his or her race, color, sex, or national origin. Critics said the law amounted to a broad suppression of speech. FIRE, the Foundation for Individual Rights and Expression, has since sued DeSantis, accusing him of “unconstitutional censorship.” A federal judge temporarily blocked portions of the Stop WOKE Act, ruling that the legislation infringed First Amendment rights.
“Florida’s legislators may well find plaintiffs’ speech ‘repugnant.’ But under our constitutional scheme, the ‘remedy’ for repugnant speech is more speech, not enforced silence,” wrote Judge Mark Walker, in a colorful opinion castigating the law.
It is unclear how much the DHS programs affect Americans’ regular social feeds. During the 2020 election, the government flagged numerous posts as worrisome, many of which were then removed, according to documents cited in the Missouri attorney general’s lawsuit. According to a 2021 analysis by Stanford University’s Election Integrity Partnership, of roughly 4,800 flagged items, technology platforms took action on 35 percent, either removing, labeling, or soft-blocking posts, meaning users could view the content only after bypassing a warning screen. The study was conducted “in consultation with CISA,” the Cybersecurity and Infrastructure Security Agency.
Prior to the 2020 election, tech companies such as Twitter, Facebook, Reddit, Discord, Wikipedia, Microsoft, LinkedIn, and Verizon Media met with the FBI, CISA, and other government authorities on a monthly basis. According to NBC News, the discussions were part of an ongoing endeavor between the corporate sector and the government to determine how businesses would deal with misinformation during the election.
Following high-profile hacking breaches of U.S. corporations, Congress approved and President Donald Trump signed the Cybersecurity and Infrastructure Security Agency Act, establishing a new arm of DHS dedicated to securing critical national infrastructure. The DHS Office of Inspector General charted the rapidly accelerating shift toward policing disinformation in an August 2022 report.
From the beginning, CISA boasted of its “evolved mission” to track social media conversations while “routing disinformation concerns” to private sector platforms.
To combat electoral disinformation, then-DHS Secretary Kirstjen Nielsen established the Countering Foreign Influence Task Force in 2018. The task force, which includes members of CISA and its Office of Intelligence and Analysis, gathered “threat intelligence” regarding the election and alerted social media sites and law enforcement. Simultaneously, the DHS began informing social media firms about voting-related disinformation that appeared on their platforms.
According to the inspector general’s report, DHS established a separate entity dubbed the Foreign Influence and Interference Branch in 2019 to generate more specific intelligence concerning disinformation. That year, its personnel expanded to include 15 full- and part-time disinformation analysts. According to Acting Secretary Chad Wolf’s Homeland Threat Assessment, the disinformation focus expanded to encompass Covid-19 in 2020.
This apparatus was put through its paces during the 2020 election, when CISA began collaborating with other parts of the U.S. intelligence community. Personnel from the Office of Intelligence and Analysis took part in “weekly teleconferences to coordinate Intelligence Community activities to counter election-related disinformation.” Meetings continued every two weeks after the election, according to the IG report.
The procedure for such removal requests in the months before November 2020 is described in emails exchanged between DHS representatives, Twitter, and the Center for Internet Security. The digital platforms would be expected to “process reports and provide timely responses, to include the removal of reported misinformation from the platform where possible,” according to meeting notes. In practice, this frequently meant that state election authorities would flag examples of possible disinformation to CISA, which would forward them to social media companies for a response.
The emphasis has remained on disinformation under President Joe Biden. The Countering Foreign Influence Task Force was superseded by the “Misinformation, Disinformation, and Malinformation” team in January 2021. This team was established “to promote more flexibility to focus on general MDM.” By this point, the effort’s focus had widened to include domestically produced disinformation in addition to that created by foreign governments. The MDM team “counters all types of disinformation, to be responsive to current events,” according to a CISA official referenced in the IG report.
Jen Easterly, the CISA director appointed by Biden, quickly stated that she would keep allocating resources within the organization to stop the spread of potentially harmful information on social media. “One could argue we’re in the business of critical infrastructure, and the most critical infrastructure is our cognitive infrastructure, so building that resilience to misinformation and disinformation, I think, is incredibly important,” said Easterly, speaking at a conference in November 2021.
CISA’s domain has gradually grown to include more subjects that it considers to be vital infrastructure. The Intercept revealed last year the existence of a succession of DHS field intelligence reports warning of cell tower attacks, which it has linked to conspiracy theorists who claim 5G antennas spread Covid-19. According to one intelligence report, these conspiracy theories “are inciting attacks against the communications infrastructure.”
CISA defended its expanding social media monitoring powers, claiming that “once CISA notified a social media platform of disinformation, the social media platform could independently decide whether to remove or modify the post.” However, as demonstrated by records obtained through the Missouri lawsuit, CISA’s purpose is to make platforms more responsive to its proposals.
Easterly texted Matthew Masterson, a Microsoft employee who previously worked at CISA, in late February that she was “trying to get us in a place where Fed can work with platforms to better understand mis/dis trends so relevant agencies can try to prebunk/debunk as useful.”
Meeting records of the CISA Cybersecurity Advisory Committee, the key subcommittee in charge of CISA’s disinformation policy, demonstrate a consistent attempt to broaden the scope of the agency’s disinformation-fighting weapons.
The same DHS advisory committee for CISA, which includes Vijaya Gadde, Twitter’s head of legal policy, trust, and safety, and Kate Starbird, a professor at the University of Washington, issued a report in June urging the agency to take a larger role in shaping the “information ecosystem.” The report recommended that the agency closely monitor “social media platforms of all sizes, mainstream media, cable news, hyper partisan media, talk radio, and other online resources.” It said the agency needed to act to stop the “spread of false and misleading information,” with a particular focus on material that threatens “key democratic institutions, such as the courts, or by other sectors, such as the financial system, or public health measures.”
According to the paper, in order to achieve these broad goals, CISA should engage in external research to analyze the “efficacy of interventions,” notably research into how alleged disinformation may be addressed and how quickly messages propagate. CISA’s Election Security Initiative director, Geoff Hale, suggested using third-party information-sharing NGOs as a “clearing house for information to avoid the appearance of government propaganda.”
On Thursday, shortly after billionaire Elon Musk completed his acquisition of Twitter, Gadde was fired from the company.
Read the document below: