Tuesday, May 12, 2026

Learning Word Vectors for Sentiment Analysis: A Python Reproduction



We automated the evaluation and made the code available on GitHub.

The idea for this article came to me when I tried to reproduce the paper “Learning Word Vectors for Sentiment Analysis” by Maas et al. (2011).

At the time, I was still in my final year of engineering school. The goal was to reproduce the paper, challenge the authors’ methods, and, if possible, compare them with other word representations, including LLM-based approaches.

What struck me was how simple and elegant the method was. In a way, it reminded me of logistic regression in credit scoring: simple, interpretable, and still powerful when used correctly.

I enjoyed reading this paper so much that I decided to share what I learned from it.

I strongly recommend reading the original paper. It will help you understand what is at stake in word representation, especially how to analyze the proximity between two words from both a semantic perspective and a sentiment-polarity perspective, given the specific contexts in which those words are used.

At first, the model seems simple: build a vocabulary, learn word vectors, incorporate sentiment information, and evaluate the results on IMDb reviews.

But when I started implementing it, I realized that several details matter a lot: how the vocabulary is built, how document vectors are represented, how the semantic objective is optimized, and how the sentiment signal is injected into the word vectors.

In this article, we’ll reproduce the main ideas of the paper using Python.

We’ll first explain the intuition behind the model. Then we’ll present the structure of the data used in the article, construct the vocabulary, implement the semantic component, add the sentiment objective, and finally evaluate the learned representations using a linear SVM classifier.

The SVM will allow us to measure classification accuracy and compare our results with those reported in the paper.

What problem does the paper solve?

Traditional Bag of Words models are useful for classification, but they don’t learn meaningful relationships between words. For example, the words wonderful and fantastic should be close because they express similar meaning and similar sentiment. By contrast, wonderful and horrible may appear in similar movie-review contexts, but they express opposite sentiments.

The goal of the paper is to learn word vectors that capture both semantic similarity and sentiment orientation.

Data structure

The dataset contains:

  • 25,000 labeled training reviews (documents)
  • 50,000 unlabeled training reviews
  • 25,000 labeled test reviews

The labeled reviews are polarized:

  • Negative reviews have ratings from 1 to 4
  • Positive reviews have ratings from 7 to 10

The ratings are linearly mapped to the interval [0, 1], which lets the model treat sentiment as a continuous probability of positive polarity.
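For instance, a 1-to-10 star rating can be mapped with a simple affine transformation. The exact formula below is our own choice; the paper only states that the mapping is linear:

def rating_to_sentiment(stars: int) -> float:
    """Linearly map a 1-10 star rating onto [0, 1]."""
    return (stars - 1) / 9.0

assert rating_to_sentiment(1) == 0.0    # most negative
assert rating_to_sentiment(10) == 1.0   # most positive

On disk, the dataset is organized as follows: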

aclImdb/
├── train/
│   ├── pos/    "0_10.txt"   -> review #0, 10 stars, very positive
│   │           "1_7.txt"    -> review #1, 7 stars, positive
│   ├── neg/    "10_2.txt"   -> review #10, 2 stars, very negative
│   │           "25_4.txt"   -> review #25, 4 stars, negative
│   └── unsup/  "938_0.txt"  -> review #938, 0 stars, unlabeled
└── test/
    ├── pos/    positive reviews, never seen during training
    └── neg/    negative reviews, never seen during training

We can therefore store each document in a Review class with the following attributes: text, stars, label, and bucket.

Of course, it doesn’t have to be a class specifically named Review. Any object can be used as long as it provides at least these attributes.

from dataclasses import dataclass

@dataclass
class Review:
    text: str     # raw review text
    stars: int    # star rating parsed from the file name (0 for unlabeled reviews)
    label: str    # e.g. "pos", "neg", or "unsup"
    bucket: str   # e.g. "train" or "test"

Vocabulary construction

The paper builds a fixed vocabulary by first ignoring the 50 most frequent words, then keeping the next 5,000 most frequent tokens.

No stemming is applied. No standard stopword removal is used. This is important because some stopwords, especially negations, can carry sentiment information.

Before building this vocabulary, we first need to look at the raw data.

We noticed that the reviews are not fully cleaned. Some documents contain HTML tags, so we remove them during the data-loading step. We also remove punctuation attached to words, such as ".", ",", "!", or "?".

This is a slight difference from the original paper. The authors keep some non-word tokens because they may help capture sentiment. For example, "!" or ":-)" can carry emotional information. In our implementation, we choose to remove this punctuation and later evaluate how much this decision impacts the final model performance. A cleaning step along these lines is sketched below.
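As a rough sketch, the regular expressions here being our own choice rather than anything prescribed by the paper:

import re

def clean_review(raw: str) -> list[str]:
    """Lowercase a review, strip HTML tags and punctuation, and tokenize on whitespace."""
    text = re.sub(r"<[^>]+>", " ", raw)      # drop HTML tags such as <br />
    text = re.sub(r"[^\w\s']", " ", text)    # drop punctuation attached to words
    return text.lower().split()

print(clean_review("Wonderful movie!<br />A must-see, really."))
# ['wonderful', 'movie', 'a', 'must', 'see', 'really']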

When working with text data, the next question is always the same:

How should we represent documents and words numerically?

The authors start by gathering all tokens from the training set, including both labeled and unlabeled reviews. We can think of this as putting all words from the training documents into one big basket.

Then, to represent words in a space where we can train a model, they build a set of words called the vocabulary.

The authors build a dictionary that maps each token, which we’ll loosely call a word, to its frequency. This frequency is simply the number of times the token appears in the full training set, including both labeled and unlabeled reviews.

Then they select the 5,000 most frequent words, after removing the 50 most frequent ones.

These 5,000 words form the vocabulary V.
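A minimal way to build this vocabulary with collections.Counter, assuming tokenized_docs is the list of token lists produced by the cleaning step above:

from collections import Counter

def build_vocabulary(tokenized_docs, n_skip=50, n_keep=5000):
    """Keep the n_keep most frequent tokens after dropping the n_skip most frequent ones."""
    counts = Counter(token for doc in tokenized_docs for token in doc)
    ranked = [token for token, _ in counts.most_common()]
    return {token: idx for idx, token in enumerate(ranked[n_skip:n_skip + n_keep])}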

Each word in V will correspond to one column of the representation matrix R. The authors choose to represent each word in a 50-dimensional space. Therefore, the matrix R has the following shape:

$R \in \mathbb{R}^{\beta \times |V|} = \mathbb{R}^{50 \times 5000}$

Each column of R is the vector representation of one word: $\phi_w = R_w$

The goal of the model is to learn this matrix R so that the word vectors capture two things at the same time:

  • Semantic information, meaning words used in similar contexts should be close;
  • Sentiment information, meaning words carrying similar polarity should also be close.

This is the central idea of the paper.

Once the data is loaded and cleaned and the vocabulary is built, we can move on to the construction of the model itself.

The first part of the model is unsupervised. It learns semantic word representations from both labeled and unlabeled reviews.

Then, the second part adds supervision by using the star ratings to inject sentiment into the same vector space.

Semantic component

The semantic component defines a probabilistic model of a document.

Each document is associated with a latent vector $\theta$. This vector represents the semantic direction of the document.

Each word has a vector representation $\phi_w$, stored as a column of the matrix R.

The probability of observing a word w in a document is given by a softmax model:

$p(w \mid \theta; R, b) = \dfrac{\exp(\theta^\top \phi_w + b_w)}{\sum_{w' \in V} \exp(\theta^\top \phi_{w'} + b_{w'})}$

Intuitively, a word becomes likely when its vector $\phi_w$ is well aligned with the document vector $\theta$.
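In code, this conditional distribution is a standard softmax over the vocabulary. In the sketch below, R is the β × |V| representation matrix, b the per-word bias vector, and theta the document vector; it illustrates the formula, not the authors’ implementation:

import numpy as np

def word_probabilities(theta, R, b):
    """p(w | theta; R, b): softmax of theta . phi_w + b_w over the vocabulary."""
    scores = theta @ R + b       # shape (|V|,), one score per word
    scores -= scores.max()       # stabilize the exponentials
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()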

MAP estimation of θ

The model alternates between two steps.

First, it fixes R and b and estimates one θ vector for each document.

Then, it fixes θ and updates R and b.

The θ vectors are not kept as final parameters. They are temporary document-specific variables used to update the word representations.

To estimate the parameters of the model, the authors use maximum likelihood.

The idea is simple: we want to find the parameters R and b that make the observed documents as likely as possible under the model.

Starting from the probabilistic formulation of a document, they introduce a MAP estimate $\hat{\theta}_k$ for each document $d_k$. Then, by taking the logarithm of the likelihood and adding regularization terms, they obtain the objective function used to learn the word representation matrix R and the bias vector b:

$\nu \|R\|_F^2 + \sum_{d_k \in D} \left[ \lambda \|\hat{\theta}_k\|_2^2 + \sum_{i=1}^{N_k} \log p(w_i \mid \hat{\theta}_k; R, b) \right]$

which is maximized with respect to R and b. The hyperparameters of the model are the regularization weights (λ and ν) and the word vector dimensionality β.
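To make the alternating scheme concrete, here is a sketch of the θ step: with R and b frozen, we run a few gradient-ascent steps on the log-likelihood of one document, treating the quadratic term as a Gaussian-prior penalty. The step size, iteration count, and word_ids argument are our own choices, and word_probabilities is the helper sketched above:

import numpy as np

def estimate_theta(word_ids, R, b, lam=1e-3, lr=0.05, n_steps=20):
    """MAP estimate of a document vector theta, with R and b held fixed."""
    theta = np.zeros(R.shape[0])
    for _ in range(n_steps):
        p = word_probabilities(theta, R, b)
        # gradient of sum_i log p(w_i | theta) is sum_i (phi_{w_i} - E_p[phi_w])
        grad = R[:, word_ids].sum(axis=1) - len(word_ids) * (R @ p)
        grad -= 2 * lam * theta              # Gaussian prior on theta
        theta += lr * grad
    return theta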

In this step, we learn the semantic representation matrix. This matrix captures how words relate to one another based on the contexts in which they appear.

Sentiment component

The semantic model alone can learn that words occur in similar contexts. But this is not enough to capture sentiment.

For example, wonderful and horrible may both occur in movie reviews, but they express opposite opinions.

To solve this, the paper adds a supervised sentiment objective:

$p(s = 1 \mid w; R, \psi) = \sigma(\psi^\top \phi_w + b_c)$

The vector $\psi$ defines a sentiment direction in the word vector space. Here, only the labeled data are used.

If a word vector lies on one side of the hyperplane, it is considered positive. If it lies on the other side, it is considered negative.
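This objective is a logistic regression over word vectors. A direct transcription (a sketch; psi and b_c are learned jointly with R):

import numpy as np

def sentiment_probability(phi_w, psi, b_c):
    """p(s = 1 | w): sigmoid of the word vector projected on the sentiment direction psi."""
    return 1.0 / (1.0 + np.exp(-(psi @ phi_w + b_c)))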

They combine the semantic objective and the sentiment objective to build the final, full learning objective:

$\nu \|R\|_F^2 + \sum_{k=1}^{|D|} \left[ \lambda \|\hat{\theta}_k\|_2^2 + \sum_{i=1}^{N_k} \log p(w_i \mid \hat{\theta}_k; R, b) \right] + \sum_{k=1}^{|D|} \frac{1}{|S_k|} \sum_{i=1}^{N_k} \log p(s_k \mid w_i; R, \psi, b_c)$

The first part learns semantic similarity. The second part injects sentiment information. The regularization terms prevent the vectors from growing too large.

$|S_k|$ denotes the number of documents in the dataset with the same rounded value of $s_k$. The weighting $\frac{1}{|S_k|}$ is introduced to combat the well-known imbalance in ratings present in review collections.

Classification and results

Once the word representation matrix R has been learned, we can use it to build document-level features.

The objective is now to classify each movie review as positive or negative.

To do this, the authors train a linear SVM on the 25,000 labeled training reviews and evaluate it on the 25,000 labeled test reviews.

The important question is not only whether the word vectors are meaningful, but whether they help improve sentiment classification.

To answer this question, we evaluate several document representations and compare them with the results reported in Table 2 of the paper.

The only thing that changes from one configuration to another is the way each review is represented before being passed to the classifier.

1. Bag of Words baseline

The first representation is a standard Bag of Words. In the paper, this baseline is reported as Bag of Words (bnc). The notation means:

  • b = binary weighting
  • n = no IDF weighting
  • c = cosine normalization

A review (document) is represented by a vector v of dimension 5,000, because the vocabulary contains 5,000 words.

For each word j in the vocabulary:

$v_j = \begin{cases} 1 & \text{if word } j \text{ appears in the review} \\ 0 & \text{otherwise} \end{cases}$

So this representation only records whether a word appears at least once. It does not count how many times it appears.

Then the vector is normalized by its Euclidean norm:

$v_{bnc} = \frac{v}{\|v\|_2}$

This gives the Bag of Words baseline used to train the SVM.
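Building the bnc representation takes only a few lines of NumPy, assuming the vocab mapping from the vocabulary step:

import numpy as np

def bow_bnc(tokens, vocab):
    """Binary bag-of-words over the 5,000-word vocabulary, cosine-normalized."""
    v = np.zeros(len(vocab))
    for token in tokens:
        if token in vocab:
            v[vocab[token]] = 1.0        # binary weighting: presence only
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v   # cosine normalization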

This baseline is strong because sentiment classification often relies on direct lexical clues. Words such as excellent, boring, terrible, or great already carry useful sentiment information.

2. Semantic-only word vector representation

The second representation uses the word vectors learned by the semantic-only model.

The authors first represent a document as a Bag of Words vector v. Then they compute a dense document representation by multiplying this vector by the learned matrix:

$z_{\text{semantic}} = R_{\text{semantic}} \, v$

where $R_{\text{semantic}} \in \mathbb{R}^{50 \times 5000}$ and $v \in \mathbb{R}^{5000}$, so $z_{\text{semantic}} \in \mathbb{R}^{50}$.

This vector can be interpreted as a weighted combination of the word vectors that appear in the review.

In the paper, when producing document features through the product Rv, the authors use bnn weighting for v. This means:

  • b = binary weighting
  • n = no IDF weighting
  • n = no cosine normalization before projection

Then, after computing Rv, they apply cosine normalization to the final dense vector.

So the final representation is:

$\bar{z}_{\text{semantic}} = \dfrac{R_{\text{semantic}} \, v}{\| R_{\text{semantic}} \, v \|_2}$

This representation uses semantic information learned from the training reviews, including both labeled and unlabeled documents.
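In code, projecting a binary (bnn) bag-of-words vector through the learned matrix and normalizing the result might look like this sketch, reusing the vocab mapping:

import numpy as np

def dense_features(tokens, vocab, R):
    """Project a binary bag-of-words vector through R, then cosine-normalize."""
    v = np.zeros(len(vocab))
    for token in tokens:
        if token in vocab:
            v[vocab[token]] = 1.0   # bnn: binary, no IDF, no normalization
    z = R @ v                       # 50-dimensional document vector
    norm = np.linalg.norm(z)
    return z / norm if norm > 0 else z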

3. Full semantic + sentiment representation

The third representation follows the same construction, but uses the full matrix $R_{\text{full}}$.

This matrix is learned with both components of the model:

  • the semantic objective, which learns contextual similarity between words;
  • the sentiment objective, which injects polarity information from the star ratings.

For each document, we compute:

$z_{\text{full}} = R_{\text{full}} \, v$

Then we normalize:

$\bar{z}_{\text{full}} = \dfrac{R_{\text{full}} \, v}{\| R_{\text{full}} \, v \|_2}$

The intuition is that $R_{\text{full}}$ should produce document features that capture both what the review is about and whether the language is positive or negative.

This is the main contribution of the paper: learning word vectors that combine semantic similarity and sentiment orientation.

4. Full representation + Bag of Words

The final configuration combines the learned dense representation with the original Bag of Words representation.

We concatenate the two representations to obtain:

$x = \left[ \bar{z}_{\text{full}} \;\middle|\; v_{bnc} \right]$

This gives the classifier two complementary sources of information:

  • a dense 50-dimensional representation learned by the model;
  • a sparse lexical representation that preserves exact word-presence information.

This combination is useful because word vectors can generalize across similar words, while Bag of Words features keep precise lexical evidence.

For example, the dense representation may learn that wonderful and fantastic are close, while the Bag of Words representation still preserves the exact presence of each word.
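The combined representation is then a simple concatenation of the two feature vectors, reusing the two helpers sketched above:

import numpy as np

def combined_features(tokens, vocab, R_full):
    """Concatenate dense full-model features with sparse bnc bag-of-words features."""
    return np.concatenate([dense_features(tokens, vocab, R_full),  # 50 dims
                           bow_bnc(tokens, vocab)])                # 5,000 dims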

We then train a linear SVM on the labeled training set and evaluate it on the test set.

This allows us to answer two questions.

First, do the learned word vectors improve sentiment classification?

Second, does adding sentiment information to the word vectors help beyond semantic information alone?
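Training and evaluating the classifier is straightforward with scikit-learn. In this sketch, X_train, y_train, X_test, and y_test stand for the stacked document features and polarity labels, whichever representation was chosen:

from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

clf = LinearSVC(C=1.0)       # linear SVM, as in the paper
clf.fit(X_train, y_train)    # 25,000 labeled training reviews
accuracy = accuracy_score(y_test, clf.predict(X_test))
print(f"test accuracy: {accuracy:.4f}")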

Implementation in Python

We implement the model in five steps:

  1. Load and clean the IMDb dataset
  2. Build the vocabulary
  3. Train the semantic component
  4. Train the full semantic + sentiment model
  5. Evaluate the learned representations using an SVM

The table below shows the nearest neighbors of selected target words in the learned vector space.

For each target word, we report the 5 most similar words according to cosine similarity. The full model, which combines the semantic and sentiment objectives, tends to retrieve words that are close both in meaning and in sentiment orientation. The semantic-only model captures contextual and lexical similarity, but it doesn’t explicitly use sentiment labels during training.
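The neighbor lists can be produced by comparing columns of R with cosine similarity (a sketch; vocab maps words to column indices):

import numpy as np

def nearest_neighbors(word, R, vocab, k=5):
    """Return the k words whose vectors are most cosine-similar to `word`."""
    inv_vocab = {idx: tok for tok, idx in vocab.items()}
    cols = R / (np.linalg.norm(R, axis=0, keepdims=True) + 1e-12)  # unit-norm columns
    sims = cols.T @ cols[:, vocab[word]]                           # cosine similarity to target
    ranked = np.argsort(-sims)                                     # best matches first
    return [inv_vocab[i] for i in ranked if i != vocab[word]][:k]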

The table below compares our results with those reported in the paper. For each representation, we train a linear SVM on the labeled training reviews and report the classification accuracy on the test set. This allows us to evaluate how well each document representation performs on the IMDb sentiment classification task.

Our results vs. the paper’s results.

The full model is very close to the result reported in the paper. This suggests that the sentiment objective is implemented correctly.

The largest gap appears in the semantic-only model. This may come from optimization details, preprocessing, or the way document-level features are built for classification.

Conclusion

In this article, we reproduced the main components of the model proposed by Maas et al. (2011).

We implemented the semantic objective, added the sentiment objective, and evaluated the learned word vectors on IMDb sentiment classification.

The model shows how unlabeled data can help learn semantic structure, while labeled data can inject sentiment information into the same vector space.

It is a simple but powerful idea: word vectors should not only capture what words mean, but also how they feel.

While this post doesn’t cover every detail of the paper, we highly recommend reading the authors’ original work. Our goal was to share the ideas that inspired us and the joy we found both in reading the paper and in writing this post.

We hope you enjoy it as much as we did.

Image Credit

All images and visualizations in this article were created by the author using Python (pandas, matplotlib, seaborn, and plotly) and Excel, unless otherwise stated.

References

[1] Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning Word Vectors for Sentiment Analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.

Dataset: IMDb Large Movie Review Dataset (CC BY 4.0).


