(June 9): US President Donald Trump’s attack on Twitter Inc. has highlighted how the European Union and the US are taking radically different approaches to overhaul how social media platforms should treat user content.

As both sides of the Atlantic move to update longstanding legal protections for internet platforms, Europe's goal is to oblige tech companies to cut back on hate speech and disinformation. In the US, Trump is seeking to strip those legal protections from platforms that engage in censorship or other political conduct.

The US and EU rules, which protect social media companies and other platforms from liability for what users post on their sites, were designed more than 20 years ago to promote growth in the then-nascent internet sector and have since underpinned how the web works today.

Now, as the rules are being re-examined both in the EU and the US, the question for policymakers is how platforms should treat user-generated content posted to their sites -- which could consist of hate speech, incite violence or spread disinformation -- and what legal ramifications platforms should face with respect to those decisions.

Twitter's decisions to add a fact-check label to Trump's unsubstantiated claims about mail-in voting and a warning that his post about the protests in Minneapolis glorified violence drew the president's ire, but they have garnered support from a senior EU official.

“I want platforms to become more responsible, therefore I support Twitter’s action to implement a transparent and consistent moderation policy,” said European Commission Vice President Vera Jourova, referring to the labels on Trump’s tweets.

“This is not about censorship,” Jourova said, speaking at an event streamed online last week, “it is about having some limits and taking some responsibility of what is happening in the digital world.”


But Twitter’s fact-check label prompted Trump to unveil an executive order aimed at scrapping legal protections for social media sites that engage in censorship or other political conduct.

The measures, which, if enacted, would force companies to be more hands-off about what users post, are drawing a distinct line with Europe over how to approach content-moderation policies.

In contrast, the EU is planning changes to its framework, which it is set to announce by year-end, so that platforms like Twitter and Facebook Inc. shoulder more responsibility if users spread hate speech or other illegal content. That means platforms could be obliged to scour their sites for those posts instead of acting as neutral conduits.

Recent European laws have already chipped away at the longstanding legal protections, for instance by requiring platforms to obtain licenses for copyrighted content before user posts are uploaded. In France and Germany, platforms can be fined if they fail to remove illegal hate speech and other content quickly enough. And various EU initiatives, including voluntary codes of conduct, have also pressured platforms to remove hate speech or demote disinformation.

Some of those previous initiatives have drawn concern from tech representatives, who say such rules harm freedom of speech by incentivizing firms to block more content than necessary in order to avoid sanctions. Trump’s order, meanwhile, is also eliciting pushback from the tech community, which worries that it attempts to punish a private company for speech the government doesn’t like, in violation of the First Amendment.

“There are significant differences between the executive order and European efforts to regulate intermediary liability, though both will have an impact on lawful speech,” said Matt Schruers, president of the Computer and Communications Industry Association, which represents Facebook and Alphabet Inc.’s Google.

While platforms like YouTube and Facebook are wary of shouldering too much liability for user posts, they have also suffered reputational blows in recent years for not doing enough to police activity on their sites, including letting Russians spread disinformation to influence the 2016 US presidential election and the UK’s Brexit vote.

The pressure to do more is coming internally, too. After Facebook employees blasted Chief Executive Officer Mark Zuckerberg for his decision to leave the same Trump posts untouched, he eventually said the company would review some of its content policies.

As the EU prepares its so-called Digital Services Act, officials are looking to provide clearer responsibilities for platforms without scrapping the liability protection altogether, according to a person familiar with the matter. The rules will also seek to avoid creating incentives for over-removal of content, the person said.

For instance, under the new EU framework, platforms could be subject to fines if they don’t have adequate systems in place to remove or keep illegal content off their sites, rather than for individual decisions about a specific piece of content, the person said.

Another option under consideration, which has been pushed for by platform association Edima, is to remove any disincentives platforms might have to pursue illegal hate speech or other bad content on their sites. Under current EU laws, firms are only liable for content once they’ve been made aware of it, making it unattractive for them to proactively seek out such posts.

Meanwhile, the US attempt to impose liability on tech companies may be more difficult to enact. Legal scholars have said the order is unlikely to survive a court challenge, such as the one filed by the Center for Democracy and Technology, a non-profit group whose advisory council includes representatives from Facebook, Twitter, Amazon.com Inc. and others, which claims the edict violates free-speech protections.

In addition, Democrats’ views on hate speech and election misinformation in many ways mirror Europe’s, and the debate could shift again if they take power after the November elections.