FACEBOOK has launched a new guide for the Muslim community to help improve online safety and give advice on reporting hate speech.

Keeping Muslims Safe Online: Tackling Hate and Bigotry includes information on how to tackle and respond to hateful content on the social network in a positive way, as well as details on how to report content inspired by terrorism.

It has been produced in partnership with community group Faith Associates and will be made available online as well as distributed to mosques around the UK.

The new guide is the first piece of work to come from Facebook's Online Civil Courage Initiative (OCCI), which launched earlier this year as a programme to fight extremism and hate speech online.

Faith Associates chief executive Shaukat Warraich said the guide would be an important tool for Muslims on the site.

"We have seen the dramatic rise of Islamophobic rhetoric and terrorist-inspired content that manifests online and appreciate Facebook's effort in supporting grass roots organisations to enhance the knowledge and develop the skills of Muslim users to directly deal with it when they see it," he said.

"I am confident that this guide will be well received in the Muslim community and will empower Muslim users of Facebook with the confidence and resilience to become effective in identifying and tackling the risks that exist online.

"The guide will also aid the development of mosques, madrassas and imams in their efforts to deliver a holistic service to their constituencies.

"Faith Associates will be working closely with them to help deliver training on e-safety and facilitate spaces for conversation around pertinent social issues."

Facebook has received extensive criticism in the past over its policing of explicit, dangerous and extreme content on the site, and has been warned by the Government that it must be more proactive in removing such material.

Simon Milner, the social network's head of policy in the UK, said the company is using multiple techniques to fight unsuitable material.

"Facebook welcomes all communities and there is no place for hate on the platform," he said.

"We work in a number of ways to tackle this issue - from the use of artificial intelligence to find and remove terrorist propaganda, to our teams of counter-terrorism experts and reviewers around the world working to keep extremist content off our platform.

"Partnerships with others - including tech companies, civil society, researchers and governments - are also a crucial piece of the puzzle and some of our most effective partnerships are focused on counter-speech, which means encouraging people to speak out against violence and extremism."

In a blog post, Facebook's global head of policy Monika Bickert said that the use of artificial intelligence was "showing promise".

"Today, 99% of the ISIS and Al Qaeda-related terror content we remove from Facebook is content we detect before anyone in our community has flagged it to us, and in some cases, before it goes live on the site," she said.

"We do this primarily through the use of automated systems like photo and video matching and text-based machine learning. Once we are aware of a piece of terror content, we remove 83% of originally surfaced and subsequently uploaded copies within one hour of upload.

"As we deepen our commitment to combating terrorism by using AI, leveraging human expertise and strengthening collaboration, we recognise that we can always do more.

"We'll continue to provide updates as we develop new technology and forge new partnerships in the face of this global challenge."