Alphabet Inc.'s Google will implement more measures to identify and remove terrorist or violent extremist content from its video sharing platform YouTube, the company said in a blog post Sunday.

Google said it would take a tougher stance on videos containing supremacist or inflammatory religious content, placing them behind a warning and making them ineligible for monetization, recommendations, comments or user endorsements, even if they do not clearly violate its policies.

The company will also devote more engineering resources and increase its use of technology to help identify extremist videos, and will train new content classifiers to identify and remove such content more quickly.
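Google's post does not describe how those classifiers are built. As a purely illustrative sketch, a simple text-based content classifier might look like the following, assuming scikit-learn and entirely invented example data; it is not Google's actual system.

```python
# Illustrative sketch of a text content classifier, assuming scikit-learn.
# The transcripts and labels below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled video transcripts: 1 = flag for review, 0 = benign.
texts = [
    "join our cause and take up arms against the enemy",
    "weekly cooking show: how to bake sourdough bread",
    "recruitment message calling for violent attacks",
    "travel vlog exploring the old city markets",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic-regression classifier.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
classifier.fit(texts, labels)

# Score a new transcript; a high probability would typically route the
# video to human reviewers rather than remove it automatically.
score = classifier.predict_proba(["a call to join the fight"])[0][1]
print(f"flag probability: {score:.2f}")
```

In a real system a score like this would usually feed a review queue rather than trigger removal on its own.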

"While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now," said Google's general counsel Kent Walker.

Google will expand its collaboration with counter-extremism groups to help identify content that may be used to radicalize and recruit extremists, it said.

The company will also reach potential Islamic State recruits through targeted online advertising, and redirect them towards anti-terrorist videos in a bid to change their minds about joining.

Germany, France and Britain, where Islamist militant attacks have killed and wounded civilians in recent years, have pressed Facebook and other social media companies such as Google and Twitter to do more to remove militant content and hate speech.

Facebook offered additional insight Thursday on its efforts to remove terrorism-related content, a response to political pressure in Europe over militant groups using the social network for propaganda and recruiting.

Facebook has ramped up use of artificial intelligence techniques such as image matching and language understanding to identify and remove content quickly, the company said in a blog post.
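Facebook's post does not spell out its image-matching pipeline. One widely used technique for catching re-uploads of known imagery is perceptual hashing; the sketch below uses the Pillow and imagehash libraries with hypothetical file paths and an illustrative distance threshold, and is an assumption about the general approach rather than Facebook's implementation.

```python
# Illustrative sketch of image matching via perceptual hashing,
# using Pillow and imagehash. File paths and the threshold are
# hypothetical; this is not Facebook's actual pipeline.
from PIL import Image
import imagehash

# A hash of an image already known to violate policy.
known_hash = imagehash.phash(Image.open("known_banned_image.jpg"))

# Hash a newly uploaded image the same way.
upload_hash = imagehash.phash(Image.open("new_upload.jpg"))

# Subtracting two hashes gives the Hamming distance between them.
# A small distance means the images are near-duplicates, even after
# resizing or re-encoding, so the upload can be flagged automatically.
if known_hash - upload_hash <= 8:  # illustrative threshold
    print("near-duplicate of known content; flag for review")
```

Perceptual hashes survive common transformations such as resizing and recompression, which is what makes this general approach useful for matching re-uploaded copies of previously removed material.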