Publications

Paraphrases do not explain word analogies

Many types of distributional word embeddings (weakly) encode linguistic regularities as directions (the difference between “jump” and “jumped” will be in a similar direction to that of “walk” and “walked,” and so on). Several attempts have been made to explain this fact. We respond to Allen and Hospedales’ recent theoretical explanation (ICML 2019), which claims that word2vec and GloVe will encode linguistic regularities whenever a specific relation of paraphrase holds between the four words involved in the regularity. We demonstrate that the explanation does not go through: the paraphrase relations needed under this explanation do not hold empirically.

In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers. Association for Computational Linguistics.
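A minimal sketch of the offset regularity discussed above, assuming the gensim library and its pretrained-model downloader (glove-wiki-gigaword-50 is one of gensim's hosted GloVe models; any pretrained vectors would do):

    import numpy as np
    import gensim.downloader as api

    # Load pretrained 50-dimensional GloVe vectors (a gensim KeyedVectors object).
    kv = api.load("glove-wiki-gigaword-50")

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # The offsets of two past-tense pairs point in broadly similar directions.
    offset_jump = kv["jumped"] - kv["jump"]
    offset_walk = kv["walked"] - kv["walk"]
    print(cosine(offset_jump, offset_walk))

    # The standard arithmetic analogy prediction (3CosAdd):
    # "jump" is to "jumped" as "walk" is to ...?
    print(kv.most_similar(positive=["jumped", "walk"], negative=["jump"], topn=3))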

The code is available here.
The paper is also available here.

Analogies minus analogy test: measuring regularities in word embeddings

Vector space models of words have long been claimed to capture linguistic regularities as simple vector translations, but problems have been raised with this claim. We decompose and empirically analyze the classic arithmetic word analogy test in order to motivate two new metrics that address the issues with the standard test and that distinguish between class-wise offset concentration (similar directions between pairs of words drawn from different broad classes, such as France-London, China-Ottawa, and so on) and pairing consistency (the existence of a regular transformation between correctly matched pairs such as France:Paris::China:Beijing). We show that, while the standard analogy test is flawed, several popular word embeddings do nevertheless encode linguistic regularities.

In Proceedings of the 24th Conference on Computational Natural Language Learning (pp. 365–375). Association for Computational Linguistics.
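A toy sketch of the two quantities described above, using random numpy vectors purely for illustration. These one-line reductions are an informal reading of the metrics, not the paper's exact definitions, and with random vectors both quantities will hover near zero; the contrast only appears with real embeddings:

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    dim = 50
    # Stand-in vectors for two broad classes (real embeddings would go here).
    countries = {w: rng.normal(size=dim) for w in ["france", "china", "canada"]}
    capitals = {w: rng.normal(size=dim) for w in ["paris", "beijing", "ottawa"]}
    matched = [("france", "paris"), ("china", "beijing"), ("canada", "ottawa")]

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def mean_offset_similarity(pairs):
        # Average pairwise cosine between the offset vectors of the given pairs.
        offsets = [countries[a] - capitals[b] for a, b in pairs]
        return np.mean([cosine(u, v)
                        for u, v in itertools.combinations(offsets, 2)])

    # Class-wise offset concentration: offsets between *any* country and any
    # capital (mispairings like france-ottawa included) point in similar
    # directions.
    all_pairs = list(itertools.product(countries, capitals))
    print("offset concentration:", mean_offset_similarity(all_pairs))

    # Pairing consistency: correctly matched pairs should have more similar
    # offsets than mismatched ones.
    mismatched = [p for p in all_pairs if p not in matched]
    print("consistency gap:",
          mean_offset_similarity(matched) - mean_offset_similarity(mismatched))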

The code is available here.
The paper is also available here.