Google Sets Inaccurate & Offensive Content in its Crosshairs

Google’s latest quality raters’ guidelines have been released, updated to give raters new ways of flagging sites with content that could be considered upsetting, offensive or inaccurate.

That there is upsetting, offensive or harmful content on the internet should be abundantly clear to anyone who’s ever innocently searched for the best way to celebrate a particularly bountiful lemon harvest; for how, as just one man, you should tackle opening one tricky jar; or for the best way to make waffles appropriate for a 65th wedding anniversary.

Not that it should be surprising to anyone, really.

The same goes for inaccurate content. FAKE NEWS. Sad!

Google has been working on ways to combat the spread of FAKE NEWS for some time, particularly since the US election. These new guidelines take that effort further, employing stricter criteria by which to judge news sites’ quality, as well as brand new provisions explicitly targeting offensive or upsetting content.

The changes have led some to question Google’s right to act as an effective judge of taste, with critics accusing the search engine of pandering to softy-lefty-liberal-snowflake political correctness.

On the face of it, there’s a genuine question to be raised about Google’s supposed ideological neutrality. But a few millimetres below the face of it, where sensationalists fear to tread, lies the actual truth of the matter, which isn’t nearly so scary.

How the Quality Rater Guidelines Work

In order to improve the algorithm it uses to assess websites’ quality, Google employs thousands of actual real humans to go through and, well, assess the quality of the sites that appear at the top of results pages for a given set of queries. These people are known as quality raters, and the 160-page guidelines document is designed to show them exactly how they should be rating said quality.

Importantly, these quality raters do not have a final say over what sites rank; far from it. Rather, their job is effectively to test how well the algorithm is working, and to provide data that engineers can then use to tweak it as appropriate.

A Google quality rater celebrating a successful judgement with a dab, as is customary

The raters give each site an overall quality score (based on things like the accuracy of the content displayed, the presence of ads, etc.) and also assess whether it meets the needs of the searcher, given the query used.

The ‘Needs Met’ assessment is very important. Take a site like the Daily Mash, for example. The information on there is quite clearly false, but it’s not pretending to be true. It’s a piece of satire. And so despite hosting blatantly inaccurate information, it would receive a reasonably high ‘Needs Met’ score (so long as the user is looking for satire and not actual news). The inaccuracy would not count against its quality score.

On the other hand, sites “which appear to be deliberate attempts to misinform or deceive users by presenting factually inaccurate content (e.g., fake product reviews, demonstrably inaccurate news, etc.)” would be given the lowest possible quality score. Misinformation is only problematic if it is presented as fact.
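
To make the two separate judgements concrete, here’s a minimal sketch in Python of how a single rating might be recorded. It’s entirely hypothetical – the scale labels are taken from the public guidelines document, but the data structure and URLs are my own illustration, not Google’s internal tooling:

```python
from dataclasses import dataclass, field

# Scale labels as they appear in the public guidelines document;
# everything else here is a hypothetical illustration.
PQ_SCALE = ["Lowest", "Low", "Medium", "High", "Highest"]
NM_SCALE = ["Fails to Meet", "Slightly Meets", "Moderately Meets",
            "Highly Meets", "Fully Meets"]

@dataclass
class RaterJudgement:
    url: str
    query: str
    page_quality: str              # quality of the page in its own right
    needs_met: str                 # how well it satisfies this specific query
    flags: set = field(default_factory=set)

# Satire: plainly false, but not pretending to be true, so it can
# still rate well for a user who is actually looking for satire.
satire = RaterJudgement(
    url="thedailymash.co.uk/some-article",   # hypothetical URL
    query="daily mash satire",
    page_quality="High",
    needs_met="Highly Meets",
)

# Deliberate misinformation presented as fact gets the lowest
# possible quality score, however relevant it might look.
fake_news = RaterJudgement(
    url="fake-reviews.example/story",        # hypothetical URL
    query="product reviews",
    page_quality="Lowest",
    needs_met="Fails to Meet",
)
```

The key point the sketch captures is that the two judgements are independent: inaccuracy only drags down page quality when the page is posing as fact.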

These latest guidelines now include news sites in a category for which the page quality rating standards are much higher than usual. These pages, called Your Money or Your Life (YMYL) pages, are those which “could potentially impact the future happiness, health, or financial stability of users.”

Quality is judged by a few different criteria, but relevant here would be the condition that: “High quality news articles should contain factually accurate content presented in a way that helps users achieve a better understanding of events.”

Basically, the new guidelines urge quality raters to pay extra close attention to the truth and accuracy of claims made on sites that claim to be providing factual news articles. So far, so uncontroversial, right?

But wait, there’s more…

The Upsetting-Offensive Flag

Part of the quality raters’ job involves flagging sites that meet certain criteria – e.g. whether or not a page contains porn, or whether or not it’s written in a foreign language.

Brand new for this latest edition of the guidelines is a section covering websites that should be flagged as offensive or upsetting.

The guidelines state:

“Please assign the Upsetting-Offensive flag to all web results that contain upsetting or offensive content from the perspective of users in your locale, even if the result satisfies the user intent.”

The following list of what kind of content might typically be considered offensive is pulled straight from the document:

  • Content that promotes hate or violence against a group of people based on criteria including (but not limited to) race or ethnicity, religion, gender, nationality or citizenship, disability, age, sexual orientation, or veteran status.
  • Content with racial slurs or extremely offensive terminology.
  • Graphic violence, including animal cruelty or child abuse.
  • Explicit how-to information about harmful activities (e.g., how-tos on human trafficking or violent assault).
  • Other types of content which users in your locale would find extremely upsetting or offensive.

The guidelines also give examples of pages deserving of the flag.

Importantly, the point of the Upsetting-Offensive flag is not to discredit sites across the board, but rather to ensure that such content only appears for those who are actually looking for it.

“As a general rule of thumb, Upsetting-Offensive results contain content which is so upsetting or offensive that it should only be shown if the query is explicitly seeking this type of content.”

This is where the ‘Needs Met’ criterion comes in. As one of the guidelines’ examples shows, if someone searches for ‘Stormfront’ then the white supremacist forum will show up at the top of the results. It is still deserving of the ‘Upsetting-Offensive’ flag, but given that it is appropriate for the query, the flag will not exclude it from the SERP.

Take even the Holocaust denial page given as an example in the guidelines. That particular page will be relevant to certain queries, and Google have made it clear that they are not in the business of deciding what kind of content people should want to view online – despite what the Paul Joseph Watsons of the world would have you believe. Those who want to find racist drivel should be (and are) able to find it; but those looking for historical fact should not be presented with blatantly false and offensive nonsense.
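
In toy terms, the policy boils down to something like the following sketch (Python, entirely my own illustration – the flag name and booleans here are assumptions, and the real ranking machinery is vastly more sophisticated and entirely internal to Google):

```python
def should_demote(result_flags: set, query_seeks_offensive: bool) -> bool:
    """Toy model of the Upsetting-Offensive policy: a flagged result
    is only demoted when the query is NOT explicitly seeking that
    type of content."""
    return "Upsetting-Offensive" in result_flags and not query_seeks_offensive

# A search for 'Stormfront' explicitly seeks the flagged site,
# so the flag does not exclude it from the SERP:
assert should_demote({"Upsetting-Offensive"}, query_seeks_offensive=True) is False

# A search for 'holocaust history' does not seek denialism,
# so a flagged denial page gets pushed down the results:
assert should_demote({"Upsetting-Offensive"}, query_seeks_offensive=False) is True
```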

To reiterate: Google is refining, not censoring search results.

All that being said, there is room for a healthy dose of scepticism – here as everywhere. The problem is that the decision as to what counts as offensive is made by humans, with their varying beliefs, tastes, and thicknesses of skin. As the old adage goes: offence is taken, not given. One of the guidelines’ examples – a page painting Islam as an evil, violent belief system – is exactly the sort naysayers have jumped on: for those who actually believe as much, that page is neither offensive nor inaccurate.

For Google’s critics, it’s an example of ideology creeping into what should be a value-neutral judgement of relevance to a given query.

To give you a completely 100% fair and representative example of said naysayers, you need only dip into the below-the-line comments on any article about these latest guideline changes.

Anyway…

The thing is, while an argument against Google’s decision to start filtering search results by how potentially offensive they are can just about be made in principle, in practice the effect of all this is minimal. And the little effect it does have is more or less undeniably positive.

We’ve been told that these new changes will affect the results for at most 0.1% of queries, and then only once the quality raters’ judgements actually make their way into the algorithm. And while there will likely be fringe cases where the offensiveness of a site is up for debate (just as there would be regarding a result’s relevance to a query), I would challenge anyone to argue, in all sincerity, that an article on an openly racist website denying that the Holocaust happened should appear anywhere near the top of a search for ‘holocaust history’. The same goes for vicious polemics when a user searches for information about a religion.

If the work of the quality raters helps Google filter results so that more upsetting or offensive content is only shown to those who ask for it, more power to them.
