This service computes semantic relations between words in Russian and provides pre-trained distributional semantic models (word embeddings). It is named after RusCorpora, the site for the Russian National Corpus. They provide access to corpora, we provide access to semantic vectors (vectōrēs in Latin). These vectors reflect meaning based on word co-occurrence distribution in the training corpora (huge amounts of raw linguistic data).
In distributional semantics, words are usually represented as vectors in a multi-dimensional space of their contexts. Semantic similarity between two words is then calculated as the cosine similarity between their corresponding vectors; it takes values between -1 and 1 (in practical tasks, usually only values above 0 are used). A value of 0 roughly means that the words share no similar contexts, and thus their meanings are unrelated to each other. A value of 1 means that the words' contexts are identical, and thus their meanings are very similar.
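The cosine measure described above can be illustrated with a small self-contained sketch (the two-dimensional vectors below are invented purely for illustration; real embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(u, v):
    # cos(u, v) = dot(u, v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Vectors pointing in the same direction: similarity is (about) 1.0
print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))
# Orthogonal vectors: similarity is 0.0, i.e. no shared contexts
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))
```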
Distributional semantics is under the hood of almost all contemporary natural language understanding systems. As a rule, so-called predictive models are employed, which learn high-quality dense vectors representing word meaning (embeddings). These models are often trained using shallow artificial neural networks. One of the first and arguably still the most well-known tools in this field is word2vec, but new models and algorithms are published regularly.
Unfortunately, training word embedding models on large corpora can be computationally expensive. That is why it is important to give the Russian linguistic community access to pre-trained models. We feature ready-made models trained on several Russian corpora, and a convenient web interface to query them. You can also download the models to process them on your own. Moreover, our web service features a number of (hopefully) useful visualizations of semantic relations between words. In general, the reason behind RusVectōrēs is to lower the entry threshold for those who want to work in this new and exciting field.
RusVectōrēs is basically a tool to explore relations between words in distributional models. You can think of it as a kind of "semantic calculator". A user can choose one or several carefully prepared models trained on different corpora.
After choosing a model, it is possible to:
In the spirit of the Semantic Web, each word in each model has its own unique URI that explicitly states the lemma, the model and the part of speech (for example, https://rusvectores.org/en/ruwikiruscorpora_upos_skipgram_300_2_2018/алгоритм_NOUN/). Web pages at these URIs contain lists of the nearest semantic associates for the corresponding word, belonging to the same part of speech as the word itself. Other information about the word is also shown.
We also provide a simple API to get the list of semantic associates for a given word in a given model (one of those available via the web interface). There are two possible formats: json and csv. Perform GET requests to URLs following the pattern https://rusvectores.org/MODEL/WORD/api/FORMAT/, where MODEL is the identifier of the chosen model, WORD is the query word, and FORMAT is "csv" or "json", depending on the output format you need. We will return a json file or a tab-separated text file with the first 10 associates.
Additionally, you can get semantic similarities for word pairs in any of the provided models via queries of the following format: https://rusvectores.org/MODEL/WORD1__WORD2/api/similarity/ (note the two underscore signs).
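The two API patterns above can be sketched as simple URL builders (the model identifier and word used below are taken from the examples on this page; actually fetching the URLs requires network access, so the request itself is left as a comment):

```python
def associates_url(model, word, fmt="csv"):
    # Nearest-associates endpoint; fmt is "csv" or "json"
    return "https://rusvectores.org/{}/{}/api/{}/".format(model, word, fmt)

def similarity_url(model, word1, word2):
    # Word-pair similarity endpoint; note the double underscore between the words
    return "https://rusvectores.org/{}/{}__{}/api/similarity/".format(model, word1, word2)

url = associates_url("ruwikiruscorpora_upos_skipgram_300_2_2018", "алгоритм_NOUN", "json")
print(url)
# To actually perform the request:
# import urllib.request
# data = urllib.request.urlopen(url).read()
```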
We recommend experimenting with algebraic operations on vectors, as they often return interesting results. For example, the model trained on the news corpus returns жизнедеятельность if we subtract любовь from жизнь.
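The idea behind such queries can be sketched on toy vectors: subtract one word vector from another, then rank the remaining words by cosine similarity to the resulting offset. The tiny three-dimensional vectors below are invented for illustration only (real models use hundreds of dimensions, and actual results depend on the corpus):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Hypothetical toy embeddings, chosen so the example mirrors the one in the text
vectors = {
    "жизнь":             [0.9, 0.8, 0.1],
    "любовь":            [0.1, 0.7, 0.0],
    "жизнедеятельность": [0.8, 0.1, 0.2],
    "счастье":           [0.2, 0.9, 0.1],
}

# Vector arithmetic: жизнь - любовь
query = [a - b for a, b in zip(vectors["жизнь"], vectors["любовь"])]

# Rank the remaining words by similarity to the offset vector
candidates = {w: v for w, v in vectors.items() if w not in ("жизнь", "любовь")}
best = max(candidates, key=lambda w: cosine(query, candidates[w]))
print(best)
```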
Naturally, one can compare results from different models on one screen.
Apart from the web interface, our service also features a bot in the Telegram messenger. You can ask this bot questions while commuting to your office or university, and it will send a query to the API. This can be convenient when you want to check an idea but have no laptop nearby.
We would like RusVectōrēs to become a hub of scholarly knowledge about word embedding models for Russian; that is why there is a section with published academic papers and links to other relevant resources. At the same time, we hope that RusVectōrēs will also popularize distributional semantics and computational linguistics, making them more understandable and attractive to the Russian-speaking public.
A tutorial explaining how text preprocessing is done, how to perform basic operations on word embeddings, and how to use the RusVectōrēs API (in Russian, but with Python code).
You can also check out our sister service for English and Norwegian.
Andrey Kutuzov's talk "Distributional semantic models and their applications" (workshop at the Institute for Systems Analysis of Russian Academy of Sciences, 3 March 2017), in Russian:
If you use RusVectōrēs, please cite this paper:
Kutuzov A., Kuzmenko E. (2017) WebVectors: A Toolkit for Building Web Interfaces for Vector Semantic Models. In: Ignatov D. et al. (eds) Analysis of Images, Social Networks and Texts. AIST 2016. Communications in Computer and Information Science, vol 661. Springer, Cham (pdf, bibtex)