If you’ve ever used a social listening or social monitoring tool to analyse sentiment, then you’re familiar with the inaccuracies that afflict all of these tools – from incorrect tagging to skewed sentiment percentages. Why does that happen?
The first problem is that some sentences aren’t easy to analyse. In fact, some sentences aren’t positive or negative at all. The most common assumption in this field is that subjective sentences always express some sentiment, while objective sentences don’t. That is often the case – after all, an objective sentence presents factual information, while a subjective sentence expresses personal feelings, views or beliefs. However, it doesn’t always work that way.
For example, what’s the sentiment of a sentence like “I think I have the latest version of the browser”? Sure, it’s a subjective sentence, but it doesn’t express any sentiment. However, what about an objective sentence like “I opened the browser after updating it and it kept crashing”? It’s objective in that it states a fact, yet it expresses an implicit negative opinion about the topic: the browser’s persistent crashing.
With that said, while we know that the last sentence expresses a negative sentiment, how does a machine know that a browser that constantly crashes isn’t a good thing?
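To make the problem concrete, here is a minimal sketch of the kind of naive lexicon-based scoring many tools approximate. The word lists and scoring scheme are illustrative assumptions, not the implementation of any real product, but they show why a machine that only counts sentiment words has no idea that constant crashing is a bad thing.

```python
# A deliberately naive lexicon-based scorer. The word lists and scoring
# scheme are illustrative assumptions, not how any real social listening
# tool works.

POSITIVE = {"good", "great", "happy", "love", "excellent"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "poor"}

def naive_sentiment(sentence: str) -> str:
    """Count lexicon hits and return 'positive', 'negative' or 'neutral'."""
    words = sentence.lower().replace(",", " ").replace(".", " ").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(naive_sentiment("I think I have the latest version of the browser"))
# -> "neutral": correct, but only because there are no lexicon words at all

print(naive_sentiment("I opened the browser after updating it and it kept crashing"))
# -> "neutral": wrong. "crashing" isn't in the lexicon, so the implicit
#    negative opinion is invisible to the scorer.
```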
What Makes Sentiment Analysis Inaccurate Today?
One thing I’ve learnt studying linguistics is that language is complex. It would be naive to assume that the sentiment underlying a piece of language can always be accurately determined by a machine or an algorithm.
There are five main factors that currently stop us from relying blindly on tools for sentiment analysis:
- Context: a positive or negative sentiment word can have the opposite connotation depending on context. “I’ve done a great job” may be interpreted as a positive statement. However, in “my internet provider does a great job when it comes to stealing money from me”, doing a great job is no longer a positive thing, based on the context (“stealing money from me”).
- Sentiment Ambiguity: a sentence with a positive or negative word doesn’t necessarily express any sentiment. For example, “can you recommend a good tool I could use?” doesn’t express any sentiment, although it uses the positive sentiment word “good”. Likewise, sentences without sentiment words can express sentiment too. So, “this browser uses a lot of memory” doesn’t contain any sentiment words, although it clearly expresses a negative sentiment.
- Sarcasm: a positive or negative sentiment word can switch sentiment if there is sarcasm in the sentence. “Sure, I’m happy for my browser to crash right in the middle of my coursework” is obviously a sarcastic (and negative) statement, even though it has the positive word “happy”. We can detect the sarcasm mainly from how the sentence starts with “sure”, and the context (we know for a fact that a browser crashing is negative).
- Comparatives: social listening tools often misunderstand comparative statements. For example, what’s the sentiment of “Pepsi is much better than Coke”? If you’re reporting for Pepsi, then this is definitely a positive statement. However, if you work for Coca-Cola and you’re reporting back to the company, then this statement would be negative. Most social listening tools aren’t intelligent enough to “pick sides” when they find comparative statements like the above, so they fall back on picking the sentiment from keywords alone. The previous example would therefore be tagged as “positive” because it contains the positive keyword “better”, regardless of who you’re reporting for (the sketch after this list shows this failure mode).
- Regional Variations: a word can change sentiment and meaning depending on the language used. This is often seen in slang, dialects and language variations. An example is the word “sick”, which can change meaning based on context, tone and language, though the intended meaning is usually clear to the target audience (“That is a sick song!” vs. “I’m not feeling well at all, I might be sick”). A regional variation can be found between British and American English for words like “quite”, “rather” and “pretty”: in British English these words tend to mean “fairly”, while in American English they tend to mean “very”. This can be misunderstood in day-to-day conversations too, so it’s no wonder that tools find it problematic.
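Running the same kind of naive keyword scorer over the examples from this list makes these failure modes obvious. Again, the word lists and the scorer itself are illustrative assumptions rather than any vendor’s actual algorithm; the point is only that keyword matching alone is blind to context, sarcasm, ambiguity and comparison targets.

```python
# The same style of naive keyword scorer, applied to examples from the
# list above. The human labels in the dict are what a person would assign;
# the word lists are illustrative assumptions.

POSITIVE = {"good", "great", "happy", "better"}
NEGATIVE = {"bad", "terrible", "awful", "worse"}

def naive_sentiment(sentence: str) -> str:
    words = sentence.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

examples = {
    # Context: "great" describes something clearly negative
    "My internet provider does a great job when it comes to stealing money from me": "negative",
    # Sentiment ambiguity: a positive word, but no opinion is expressed
    "Can you recommend a good tool I could use?": "neutral",
    # Sarcasm: "happy" flips polarity
    "Sure, I'm happy for my browser to crash right in the middle of my coursework": "negative",
    # Comparative: the label depends on which brand you report for
    "Pepsi is much better than Coke": "positive for Pepsi, negative for Coke",
}

for sentence, human_label in examples.items():
    print(f"tool: {naive_sentiment(sentence):8} | human: {human_label}")
    print(f"  {sentence}")
# The scorer tags all four sentences as positive: "great", "good", "happy"
# and "better" each push the score up, regardless of context, sarcasm,
# or which side of the comparison you care about.
```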
Perhaps the biggest limit on accuracy in sentiment analysis today is human concordance: the degree to which humans agree with one another (or with machines) when judging the sentiment of the same text. Numerous studies have put the rate of human concordance at between 70% and 80%.
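To make “degree of agreement” concrete, here is a tiny sketch of pairwise agreement between two hypothetical annotators. The labels are made up, and real studies typically use larger samples and more robust measures such as Cohen’s kappa, but the idea is the same: if humans themselves only agree 70% to 80% of the time, that range is roughly the ceiling against which any tool gets judged.

```python
# Illustrative sketch of human concordance as simple pairwise agreement.
# The labels below are made up; real studies use larger samples and more
# robust measures such as Cohen's kappa.

annotator_a = ["positive", "negative", "neutral", "negative", "positive"]
annotator_b = ["positive", "negative", "positive", "negative", "neutral"]

agreed = sum(a == b for a, b in zip(annotator_a, annotator_b))
concordance = agreed / len(annotator_a)

print(f"Agreement: {agreed}/{len(annotator_a)} = {concordance:.0%}")
# -> Agreement: 3/5 = 60%
# If two people reading the same text only agree 70% to 80% of the time in
# practice, that range is effectively the ceiling for how accurate any
# automated tool can appear to be.
```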
Even humans don’t universally agree with one another on anything subjective, so it’s no wonder that sentiment, when rendered in text with no extra-textual clues (visual, aural, etc.), can be really tricky for us humans, let alone for machines. Hence, sadly, we’re nowhere close to perfect sentiment analysis.
However, despite its current flaws, social sentiment has great potential. While it’s easy to treat it as a “soft metric”, it becomes genuinely useful when put in context: yes, you’ve received a positive review, but what does it mean for your brand? What are the underlying opinions behind that specific content? In the right hands, sentiment can be the key to various social analyses, predictions and, ultimately, solid insight into your social performance.