Since the second half of 2020, the international community has paid growing attention to cyberspace governance under the influence of the COVID-19 pandemic. Governance targets include online disinformation involving the pandemic and elections, security issues raised by new technologies and applications, online violence and cyberbullying against children, and cybercrimes such as online scams. As COVID-19 continues to spread, the problems facing global cyberspace governance will become more complex, and countries will need to respond carefully.
1. Political overtones have become prominent in countries’ governance of online disinformation involving the COVID-19 pandemic and elections
In the second half of 2020, European and American countries’ policy approach to pandemic-related online disinformation was often to accuse other countries of undermining their epidemic response and political security, in order to divert public attention from the poor results of their own pandemic governance. The reality, however, is more complicated.
(1) Disinformation governance in European and American countries has a clearly political character
Since 2020, the political intent behind European and American governance of online disinformation has been evident; a common feature of their measures is to accuse China, Russia, and other countries of spreading false news online within their borders.
In June 2020, the European Commission released a report, “Tackling COVID-19 Disinformation: Getting the Facts Right”, claiming that countries such as China and Russia had spread falsehoods and misleading information during the pandemic in order to “undermine democratic debate” and “improve their own image”. Věra Jourová, the Commission’s Vice-President for Values and Transparency, said that disinformation “can not only kill citizens, but also undermine public authorities’ response to the pandemic, thereby weakening the effect of the measures they take”. The remark reflects an intention to shift the pressure of the pandemic within the EU onto other countries.
In November 2020, the UK’s Government Communications Headquarters (GCHQ) said it had uncovered a Russian-government-backed cyber operation intended to spread fake news about a coronavirus vaccine. A few weeks earlier, the UK’s Chief of the Defence Staff, General Nick Carter, had called on the United Kingdom to improve its cyber warfare capabilities to counter China, Russia, and other countries.
The Trump administration issued an executive order in May 2020 initiating a process to clarify the scope of Section 230 of the Communications Decency Act, passed by Congress in 1996. Section 230 shields social media platforms from liability for content posted by their users. In a statement, the White House argued that social media companies should not enjoy this exemption when they edit content on their platforms or remove legitimate speech for political reasons, and specifically noted that online platforms “profit from disinformation spread by foreign governments.”
(2) The global spread of pandemic-related disinformation poses governance threats
While European and American governments focused on responding to the pandemic and its economic impact in the first half of 2020, the European Commission’s digital agenda did not change course; rather, it pivoted toward concerns raised by the pandemic. Although some Internet legislation was delayed, the legislative process for pandemic-related disinformation advanced steadily in the second half of 2020.
The European Commission, which opened a public consultation on a range of digital issues in June 2020, decided to publish the European Democracy Action Plan by the end of the fourth quarter. The plan builds on lessons learned from the COVID-19 crisis and proposes measures to combat disinformation. In response to the risk of online disinformation spreading amid the pandemic in the second half of 2020, the Commission released three sets of reports submitted by online-platform signatories of the Code of Practice on Disinformation, in September, October, and November respectively, as part of the EU’s COVID-19 disinformation monitoring and reporting programme. Each set covers a different period, evaluates the measures the platforms took to limit the spread of pandemic-related disinformation, and points out their shortcomings.
In the second half of 2020, the U.S. House of Representatives considered and passed several pandemic-related fraud prevention bills. One is the Pandemic Scam Act of 2020, passed in November and sent to the Senate for consideration, which proposes that the Department of Justice and the Department of Health and Human Services inform the public about coronavirus-related fraud schemes and complaints. Another is the Coronavirus Fraud Prevention Act, passed and sent to the Senate in September, which proposes that the Consumer Financial Protection Bureau and the Securities and Exchange Commission jointly establish a consumer and investor fraud prevention working group to provide consumers and investors with legal-aid and other information to protect them from fraud during the pandemic.
These measures to limit the spread of pandemic-related disinformation show that in the second half of 2020, major European and American countries accelerated their actions against pandemic-related online fraud, with measures of a notably practical, operational character.
(3) Governance actions against election-related online disinformation became more intensive
In response to election security risks, U.S. governance of election-related online disinformation intensified in the second half of the year. On the one hand, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) updated its guidance on preventing election cybersecurity risks several times; on the other, multiple social networking platforms issued a joint statement on preventing election interference amid the battle over election-related public opinion.
Since releasing the #Protect2020 election security strategic plan in February 2020, CISA has updated its implementation-level content several times in the second half of the year, focusing on security risks to the U.S. election. On October 14 and October 19, respectively, CISA released guidance for election officials on assisting voters who were confirmed cases, at high risk, symptomatic, or in quarantine, and on physical security at polling locations and election facilities. On the disinformation front, CISA launched its “Rumor vs. Reality” webpage on October 20, addressing common election-related rumors and providing factual information to counter rumors and election disinformation on social media.
Since the second half of 2020, content moderation on Facebook, Twitter, and other platforms has become more intensive than in the first half of the year in response to the public-opinion battle around the U.S. election. In August, large technology companies including Facebook, Google, Twitter, and Microsoft issued a joint statement saying they had begun regular meetings with relevant U.S. government departments to cooperate on preventing election interference, including banning social media accounts suspected of foreign backing. In September, Facebook said it had taken down accounts and pages allegedly backed by Russia to influence the election; among them, the Russian Internet Research Agency (IRA) had been accused of waging an “information war” in the 2016 U.S. election to support Trump. A report by the social network analysis firm Graphika, “IRA Again: Unlucky Thirteen”, noted that the operation appeared aimed at building a left-wing audience and steering it away from the Biden campaign, much as the IRA had worked to suppress voter support for Hillary Clinton in 2016.
2. Countries diverge sharply between strong and weak regulation of the security issues raised by new technologies such as artificial intelligence
At present, competition over “technological sovereignty” is intensifying, and many countries have issued regulatory policies on new technologies and applications. In terms of policy strength, the apparent divide between strong and weak regulation of artificial intelligence in Europe and the United States in fact reflects choices suited to each side’s own technology industry, ultimately aimed at building its own competitiveness.
In 2020, European and American countries lacked regulatory measures governing artificial intelligence algorithms and precisely targeted online content delivery, and no clear governance measures for targeted content have yet been announced. A number of international scholars in artificial intelligence research have offered assessments and governance suggestions. Kathleen Carley, director of the Center for Computational Analysis of Social and Organizational Systems at Carnegie Mellon University, said that social media platforms’ use of algorithmic tools to prioritize information reinforces people’s cognitive biases and renders traditional methods of identifying false information ineffective, yet there is currently no global strategy to address this. John Harlow, a smart city research expert at Emerson College’s Engagement Lab, argued that the algorithms of Facebook, Twitter, YouTube, and other social media prioritize extreme content over other content, and that the problem may not change until around 2030, when courts take regulatory action to alter their business models and content-delivery practices.
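The dynamic Carley and Harlow describe, in which ranking by predicted engagement systematically surfaces provocative content, can be illustrated with a toy sketch. The scoring formula and example posts below are entirely hypothetical and do not represent any platform’s actual algorithm:

```python
# Toy illustration of engagement-based feed ranking (hypothetical model,
# not any real platform's algorithm): posts that provoke stronger
# reactions earn higher predicted-engagement scores and rise to the top,
# regardless of their accuracy.

posts = [
    {"text": "City council publishes budget minutes", "reactions": 12, "shares": 1},
    {"text": "Outrageous claim about vaccine microchips", "reactions": 480, "shares": 220},
    {"text": "Local library extends weekend hours", "reactions": 30, "shares": 4},
]

def engagement_score(post):
    # Shares are weighted more heavily than reactions (an assumed weight of 5),
    # mimicking the way engagement-optimized feeds reward content that spreads.
    return post["reactions"] + 5 * post["shares"]

# Sort the feed by predicted engagement, highest first.
ranked = sorted(posts, key=engagement_score, reverse=True)
for post in ranked:
    print(engagement_score(post), post["text"])
```

Because the ranking optimizes only for engagement, the sensational false claim outranks the accurate but unremarkable items, which is the feedback loop critics argue amplifies disinformation.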
The EU’s regulation of online social media platforms shows a shift from guided industry self-regulation to legislative regulation. In July 2020, the European Commission issued guidelines on the amended Audiovisual Media Services Directive, bringing social media within the scope of EU regulation for the first time. Previously, EU regulation of social media had relied mainly on guiding industry self-discipline; the release of the revised guidelines marks a significant strengthening of EU oversight. French President Emmanuel Macron has said the amendment would be transposed into national law by the end of 2020 and take effect on January 1, 2021.
In online platform advertising governance, many countries have pursued antitrust oversight of technology giants’ online advertising, and the contest over discourse power in the data economy has become increasingly visible. On October 20, 2020, MEPs overwhelmingly adopted the Digital Services Act legislative initiative report, bringing restrictions on targeted advertising onto the regulatory agenda; targeted advertising remains an important part of the business model of many content hosting platforms. In addition, the UK’s Competition and Markets Authority released a 437-page final report on its online platforms and digital advertising market study in July 2020, calling on the British government to introduce a competition regulation regime to curb Google’s and Facebook’s power in the digital advertising market.
3. Many countries and international organizations have introduced measures to counter the growth of online violence and bullying against children during the COVID-19 pandemic
The COVID-19 pandemic has made children’s Internet use at home more common. Since 2020, the international community has sought to curb cyberviolence and cyberbullying against minors, with the most forceful measures being those that fill gaps in existing systems. In the second half of the year in particular, many countries’ governance of cyberviolence focused on constraining the behavior of online service providers.
In June 2020, the World Health Organization, UNICEF, and other international agencies released the Global Status Report on Preventing Violence Against Children 2020. The report shows that while the Internet became a key vehicle for children to learn, play, and acquire knowledge during the pandemic, bullying, sexual exploitation, and other harmful online behaviors also grew.
The International Telecommunication Union (ITU) released the 2020 edition of its Child Online Protection Guidelines in June 2020 to support industry players in developing internal child online protection policies. UNESCO member states designated the first Thursday of November each year as the International Day against Violence and Bullying at School Including Cyberbullying, observed for the first time in 2020. UNESCO also co-hosted an online international conference with France on November 5 to discuss the increased risk of students being exposed to cyberbullying as a result of the pandemic.
In September 2020, the European Commission proposed an interim regulation on the processing of personal and other data to combat child sexual abuse, which would require online communication service providers to continue to monitor, report, and remove online child sexual abuse content. The interim regulation would apply until December 31, 2025.
The U.S. House of Representatives introduced the Kids Internet Design and Safety Act in September 2020. The bill would prohibit operators of online platforms aimed at children from promoting content involving violence or sexual behavior, and from using children’s age-verification information for commercial purposes.
The UK Information Commissioner’s Office brought its Age Appropriate Design Code into force in September 2020, encouraging online service providers to follow this set of Internet service design standards to better protect young people’s privacy. The code is a code of practice rather than directly binding legislation; it signals the ICO’s direction and expectations, and gives UK regulators a benchmark of lawfulness against which to assess the conduct of online service providers. In terms of policy effect, the government still needs to do more to keep children safe online.
4. Countries have stepped up efforts to combat the pandemic-driven rise in global cybercrime and related harms
The COVID-19 pandemic forced many formerly offline activities online. Since the second half of the year, many countries have mounted special law enforcement operations against cybercrime and online fraud, and governance has been markedly strengthened.
In July 2020, the U.S. Secret Service, part of the Department of Homeland Security, established a Cyber Fraud Task Force to combat financial crimes conducted over the Internet. With brick-and-mortar stores and bank branches forced to close and more people transacting online, the task force aims to tackle financial and security threats such as ransomware attacks, email phishing scams targeting U.S. businesses, and credit card theft through digital commerce platforms.
The Australian Federal Police’s cybercrime operations team has carried out a number of operations against online telecommunications fraud. In September 2020, it partnered with New South Wales Police on an enforcement operation against online fraud and SMS phishing attacks carried out by local cyber groups against Australian financial institutions and their customers. The head of the team said law enforcement agencies across Australia were pooling resources to further crack down on groups and individuals committing crimes across state lines.
In September 2020, Japan began implementing a package of countermeasures against cyberviolence. Most notably, it stipulates for the first time that personal information such as the mobile phone numbers of perpetrators of cyberviolence may be lawfully disclosed, and obliges online platforms to provide such information when necessary. Under Japan’s earlier Provider Liability Limitation Act, victims of cyberviolence had the right to request the abuser’s personal information from network service providers, but compliance was optional for operators. The package is regarded as important support for preventing cyberviolence in the future.
In October 2020, Europol released its seventh annual Internet Organised Crime Threat Assessment, covering the cybercrime threat in the context of the pandemic. Europol noted that the pandemic has exacerbated existing online organized crime, with phishing and online fraud remaining serious problems, and that one of the main challenges facing law enforcement is how to obtain and collect the data relevant to criminal investigations.
5. The “pandemic factor” remains an important driver of countries’ future cyberspace governance
Affected by the COVID-19 pandemic, countries stepped up governance of online disinformation involving the pandemic and elections in the second half of 2020. Many countries established working groups and issued guidelines constraining the behavior of online service providers in order to cope with the resulting disorder in cyberspace.
It is fair to say that the “pandemic factor” became a defining keyword of cyberspace governance in 2020, and as COVID-19 continues to spread worldwide, it may well remain a driving force of national network governance in 2021.
In November 2020, DigiCert’s cybersecurity expert team released its 2021 security predictions, noting that if the U.S. government provides more unemployment benefits in 2021, related online fraud will rise further. The growth of telehealth has also opened the door to cyberattacks, and telehealth organizations with data security flaws are expected to be high-value targets for hackers.
Privileged access management vendor BeyondTrust likewise released its list of notable cybersecurity trends for 2021 in October, arguing that remote workers will be hackers’ number one target in 2021: cybercriminals will try to infiltrate personal and even corporate networks to launch attacks against companies, and may also use stolen personal information to mount impersonation attacks.
Looking ahead to 2021, the spread of online disinformation, online fraud, and online extortion may increase, and governments will step up their governance efforts accordingly. Cyberattacks exploiting the growing demand for telemedicine, remote work, and unemployment benefits will also rise, and may become a focus of national cyberspace governance in 2021.