Automated Sentiment Detection Round 2: 80% Accuracy Confirmed for Blogs and Unstructured Content

by Steve Broback on March 15, 2008

I have more data points relevant to yesterday’s post. Bottom line: Yes, you need non-trivial human involvement to go beyond 80 percent accuracy with unstructured content like blogs. Text-mining vendors claim, though, that 80 percent is perfectly adequate for many projects. Based on what I’m reading, I think there is likely a market for a process like ours that can automate the tagging and extraction/compilation of relevant content at high (90 percent plus) accuracy levels.

After drafting yesterday’s post about mining blog sentiment, I discovered a Feb 27 post on the SentimentMetrics blog which reinforced what I’d heard from other gurus in the space. The SentimentMetrics blogger (Leon? — the posts don’t list the author’s name) says:

“SentimentMetrics uses an automated approach and we are currently at an 80% accuracy which is considered good in the industry…”

In addition, Mark Anderson responded to my post yesterday with a comment on his own blog. Anderson clarified:

“If you are working with longitudinal data, comparing month to month for instance, or comparing different products and brands, then an extremely accurate sentiment reading isn’t necessary, as you are really looking for differences between groups. Additionally, by considering the relationship between positive and negative sentiment in trended data (they tend to be positively correlated), you can watch for when the correlation changes. In other words, if in one month for one brand you see that negative sentiment increases while positive decreases, this signals a possible ‘event’ is occurring which needs to be drilled down into for further investigation.

However, for some of our clients in the past (such as Unilever), an extremely accurate level of sentiment was desired. Our methodology (AA-TextSM) relies on triangulation for validation, and we have sentiment accuracy in the high nineties in most cases when applying this technique. Because most of our projects are ad-hoc in nature, the human factor is very important, so Anderson Analytics, more so than those companies focusing solely on a large volume of blog posts, usually invests the time in perfecting custom dictionaries and understanding the special relationships between words in each project.

As you mention, many survey open ends are rather structured. On the other hand, many are not. For instance, if you ask a hotel guest to rate their overall satisfaction on a 10-point scale and then ask in an open-ended question why they gave this rating, you will get anything but structured answers. Our methodology has been used on other types of data as well, though (call center logs, emails, etc.).”
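
Anderson’s divergence signal is easy to sketch. Assuming you already have monthly counts of positive and negative mentions from an automated tagger, a toy check (the month labels and figures below are invented for illustration, and this is my own sketch, not his actual analysis) just flags any month where the two counts move in opposite directions:

# Hypothetical monthly counts of positive and negative mentions for one brand;
# in practice these would come from an automated sentiment tagger.
months = ["2007-09", "2007-10", "2007-11", "2007-12", "2008-01", "2008-02"]
positive = [120, 135, 150, 160, 170, 140]
negative = [40, 45, 50, 55, 60, 95]

def month_over_month(series):
    """Return the month-over-month differences for a list of counts."""
    return [curr - prev for prev, curr in zip(series, series[1:])]

pos_delta = month_over_month(positive)
neg_delta = month_over_month(negative)

# Positive and negative volume usually move together; a month where one rises
# while the other falls is the kind of divergence worth drilling into.
for month, dp, dn in zip(months[1:], pos_delta, neg_delta):
    if dp * dn < 0:
        print(f"{month}: possible event (positive {dp:+d}, negative {dn:+d})")

Run on the made-up numbers above, only the last month gets flagged: positive mentions drop while negative mentions jump, which is exactly the pattern Anderson says warrants further investigation.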

It sounds like the AA-TextSM system requires human involvement to customize the algorithmic process. In the final paragraph of his comment, Anderson attests that surveys can contain unstructured data. It seems to me that without getting humans involved (for example, to create custom dictionaries) you fall back to 80 percent accuracy when analyzing those unstructured portions.
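
To make concrete what a custom dictionary adds, here is a minimal, hypothetical sketch of lexicon-based tagging. The words, weights, and hotel example are my own invention for illustration; this is not AA-TextSM or any vendor’s actual system.

# A minimal illustration of lexicon-based sentiment scoring with a custom
# dictionary layered on top of a generic one. All entries are invented.
BASE_LEXICON = {"great": 1, "good": 1, "bad": -1, "terrible": -1}

# Human-built additions capture domain-specific language a generic dictionary
# misses (e.g. "dated" is negative when hotel guests describe rooms).
CUSTOM_LEXICON = {"dated": -1, "noisy": -1, "spotless": 1, "walkable": 1}

def score(text, lexicon):
    """Sum the lexicon weights of the words in a piece of text."""
    return sum(lexicon.get(word.strip(".,!?").lower(), 0) for word in text.split())

comment = "The lobby was spotless but the rooms felt dated and noisy."
print(score(comment, BASE_LEXICON))                        # 0: generic words miss it
print(score(comment, {**BASE_LEXICON, **CUSTOM_LEXICON}))  # -1: custom terms catch it

The point of the sketch is only that the accuracy gain comes from the human-curated entries, which is the kind of per-project effort Anderson describes.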

Comments

1 Tom H C Anderson 03.16.08 at 8:19 am

Yes, there is definitely “non-trivial” expert human involvement in AA-TEXT, though some of the learning we have discovered using the methodology could be fed back into a more automated process in the future…

Naturally, anything that can bring sentiment accuracy up would be useful. I am rather skeptical when software vendors make claims of above 80% accuracy using automated systems without human involvement.

-Tom

2 Steve Broback 03.17.08 at 11:00 pm

Thanks Tom, this is a big help as we try to better understand the monitoring space. Based on articles I’ve been reading, it appeared one vendor had either cracked the code or just had great PR. It appears to be the latter.
