We provide several free resources in the downloads section. The latest version of SenticNet is also available as a Python package and as a free API for many different programming languages. We also share some code on our GitHub account. Our most advanced tools, however, are only available on our corporate website, where you can try four demos of these tools for the following sentiment analysis tasks:
• concept extraction
• polarity detection
• aspect extraction
• multimodal affective interaction
A simple Twitter data visualization tool (Sentic Tweety) is also available here.
Before polarity detection can be performed, multiword expressions need to be extracted from text. Below is a demo of the concept parser, which quickly identifies commonsense concepts in free text without requiring time-consuming phrase structure analysis. From a sentence like “I went to the market to buy fruits and vegetables”, the parser extracts concepts such as go_to_market, market, buy_fruit, and buy_vegetable. The parser uses linguistic patterns to deconstruct natural language text into meaningful pairs, e.g., ADJ+NOUN, VERB+NOUN, and NOUN+NOUN, and then exploits commonsense knowledge to infer which of these pairs are most relevant in the given context. In this demo, the output is limited to 15 concepts.
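To illustrate the pattern-based idea (this is a toy sketch, not the actual SenticNet parser, whose patterns, lemmatizer, and knowledge base are far richer), the snippet below pairs verbs with nearby nouns using hand-written POS and lemma lexicons that stand in for a real tagger:

```python
# Toy sketch of pattern-based concept extraction. The POS, LEMMA, and
# SINGULAR lexicons are hand-written stand-ins for a real tagger and
# lemmatizer; the actual parser also ranks candidates with commonsense
# knowledge rather than keeping every pair.
POS = {
    "i": "PRON", "went": "VERB", "to": "ADP", "the": "DET",
    "market": "NOUN", "buy": "VERB", "fruits": "NOUN",
    "and": "CONJ", "vegetables": "NOUN",
}
LEMMA = {"went": "go"}
SINGULAR = {"fruits": "fruit", "vegetables": "vegetable"}

def extract_concepts(sentence, limit=15):
    """Extract bare NOUN and VERB+NOUN concepts, capped at `limit`."""
    tokens = [t.strip(".,").lower() for t in sentence.split()]
    tags = [POS.get(t, "X") for t in tokens]
    concepts = []
    for i, (tok, tag) in enumerate(zip(tokens, tags)):
        if tag == "NOUN":
            concepts.append(SINGULAR.get(tok, tok))
        elif tag == "VERB":
            verb, j, paired = LEMMA.get(tok, tok), i + 1, False
            while j < len(tokens):
                if tags[j] == "NOUN":
                    concepts.append(f"{verb}_{SINGULAR.get(tokens[j], tokens[j])}")
                    paired = True
                elif paired and tags[j] != "CONJ":
                    break  # coordination ended; stop pairing this verb
                j += 1
    # deduplicate while preserving order, then truncate to `limit`
    seen, out = set(), []
    for c in concepts:
        if c not in seen:
            seen.add(c)
            out.append(c)
    return out[:limit]

print(extract_concepts("I went to the market to buy fruits and vegetables"))
```

Note that the real parser keeps function words inside some concepts (go_to_market rather than this sketch's go_market); that level of detail is omitted here for brevity.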
Polarity detection is the most basic task of sentiment analysis and consists of the binary classification of text as either positive or negative. The demo leverages linguistic patterns and falls back on machine learning when no pattern is matched. Please note that the task of subjectivity detection is not addressed here; hence, the demo assumes that the input sentence is opinionated (not neutral). Also, the current version of the demo does not handle comparative sentences such as "I love iPhone but Android is so much better".
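The pattern-first, learning-as-fallback control flow can be sketched as follows (a hedged illustration only: the demo's actual patterns and classifier are not described here, so a toy negation rule and a lexicon scorer stand in for both):

```python
# Illustrative pattern-first polarity detection. The word lists and the
# negation rule are made up for this sketch; a trained classifier would
# replace lexicon_polarity() in a real system.
POSITIVE = {"good", "great", "love", "excellent", "amazing"}
NEGATIVE = {"bad", "poor", "hate", "terrible", "awful"}

def pattern_polarity(tokens):
    """Return +1/-1 if a simple negation pattern fires, else None."""
    for i, tok in enumerate(tokens):
        if tok in {"not", "never", "no"} and i + 1 < len(tokens):
            nxt = tokens[i + 1]
            if nxt in POSITIVE:
                return -1  # "not good" -> negative
            if nxt in NEGATIVE:
                return +1  # "not bad" -> positive
    return None

def lexicon_polarity(tokens):
    """Fallback scorer standing in for a trained classifier."""
    score = sum((t in POSITIVE) - (t in NEGATIVE) for t in tokens)
    return +1 if score >= 0 else -1

def detect_polarity(sentence):
    tokens = [t.strip(".,!?").lower() for t in sentence.split()]
    fired = pattern_polarity(tokens)
    return fired if fired is not None else lexicon_polarity(tokens)

print(detect_polarity("This phone is not bad at all"))  # pattern fires
print(detect_polarity("I hate the battery"))            # fallback scorer
```

The design point is the ordering: deterministic patterns get the first say because they encode precise linguistic knowledge, and the statistical model only decides when no pattern applies.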
Aspect extraction is a necessary pre-processing step for aspect-based sentiment analysis, i.e., the detection of polarity with respect to different product or service features (aspects) instead of the overall polarity of the opinion. This is key for correctly calculating the polarity of sentences in which antithetic opinions about different aspects of the same product are expressed. From a sentence like “the touchscreen is good but the battery lasts very little”, for example, the aspect parser extracts touchscreen and battery.
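A minimal sketch of the idea, under strong simplifying assumptions (hand-written aspect and opinion word lists, and clause splitting on "but" as a stand-in for real clause analysis; the actual aspect parser is far more sophisticated):

```python
# Toy aspect extraction: find known aspect nouns in clauses that also
# contain an opinion word. NOUNS and OPINION are illustrative lexicons,
# not the demo's actual resources.
OPINION = {"good", "bad", "great", "poor", "little", "long", "short"}
NOUNS = {"touchscreen", "battery", "screen", "camera", "price"}

def extract_aspects(sentence):
    """Return aspect nouns from each opinionated clause, in order."""
    aspects = []
    # splitting on "but" keeps antithetic opinions in separate clauses,
    # so each aspect can later receive its own polarity
    for clause in sentence.lower().split(" but "):
        tokens = [t.strip(".,") for t in clause.split()]
        if any(t in OPINION for t in tokens):
            aspects += [t for t in tokens if t in NOUNS]
    return aspects

print(extract_aspects("the touchscreen is good but the battery lasts very little"))
```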
MULTIMODAL AFFECTIVE INTERACTION
People are gradually shifting from text to video to express their opinions about products and services, as videos are now much easier and faster to produce and share. For the same reason, potential customers are now more inclined to browse video reviews of products they are interested in rather than look for lengthy written reviews. While reliable written reviews are quite hard to find, it is often sufficient to search for the name of the product on YouTube and choose the clips with the most views to find good video reviews. Moreover, videos are generally more reliable than written text, as reviewers usually reveal their identity. This demo adopts an ensemble feature extraction approach, exploiting the joint use of tri-modal (text, audio, and video) features to enhance the multimodal information extraction process. Some sample videos are available here.
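At its simplest, joint use of tri-modal features can mean feature-level fusion: each modality yields its own feature vector, and the vectors are concatenated before classification. The sketch below illustrates only this general scheme; the dimensions and feature names are invented, and the demo's actual features and model are not described here.

```python
# Illustrative feature-level (early) fusion of tri-modal features.
# All vectors and their meanings are made up for this sketch.
text_feats  = [0.2, 0.8, 0.1]   # e.g., a word-embedding summary
audio_feats = [0.5, 0.3]        # e.g., pitch/energy statistics
video_feats = [0.7, 0.4, 0.9]   # e.g., facial-expression features

def fuse(*modalities):
    """Concatenate per-modality vectors into one joint feature vector."""
    fused = []
    for m in modalities:
        fused.extend(m)
    return fused

joint = fuse(text_feats, audio_feats, video_feats)
print(len(joint))  # 3 + 2 + 3 = 8 dimensions
```

A downstream classifier would then be trained on the joint vector, letting cues from one modality compensate when another is ambiguous (e.g., sarcastic text paired with a smiling face).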