The idea of an algorithm capable of interpreting human conversation and identifying what is hateful and what is not is a fantasy. Facebook, Google, and others may think they can provide this type of service, but in the end they just restrict the use of language and become oppressive, the antithesis of what social media and the internet as a whole should be. In the case of Facebook, the system appears similar to a common word-and-phrase filter, one that has managed to block the use of the word dyke (not a nice word). The word was being used by a group called listening2lesbians, though not with a negative connotation. They noticed that members of their group were being banned by Facebook and decided to look into why.
They are not the only group to suffer this form of censorship, and they will probably not be the last. Facebook, for its part, is hiding behind its community standards policy as the reason for banning people and deactivating pro-LGBT pages that have existed for years, all because of a word. It is effectively silencing these groups through morality-based censorship. This illustrates exactly why companies should not do this. Not only is there a chance of suppressing the conversations of real groups, it also places reliance on people who might not understand those conversations (or who might have an agenda) to make the rules.
This is no better than trying to block "pornography" or piracy on the internet. Who actually gets to decide what is and is not proper? What qualifications do they have to make that decision? Do they have legal grounds or an obligation to block this content in the country where they operate? In their actions, are they infringing on anyone's rights?
We hope that Facebook and others learn from this and remove the blocks. While these filters may come from a good place, they are too easy to abuse and subvert to squash other forms of speech when it does not fit the accepted agenda.