Original poster: Lisrelchen

Gensim: Topic Modelling for Humans


Posted by Lisrelchen on 2016-07-18 06:52:12


The tutorials are organized as a series of examples that highlight various features of gensim. It is assumed that the reader is familiar with the Python language, has installed gensim, and has read the introduction.

The examples are divided into parts on corpora and vector spaces, topics and transformations, and similarity queries.

Preliminaries

All the examples can be directly copied into your Python interpreter shell. IPython's cpaste command is especially handy for copy-pasting code fragments, including the leading >>> characters.

Gensim uses Python's standard logging module to log various events at various priority levels; to activate logging (this is optional), run:

>>> import logging
>>> logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

Quick Example

First, let’s import gensim and create a small corpus of nine documents and twelve features [1]:

>>> from gensim import corpora, models, similarities
>>>
>>> corpus = [[(0, 1.0), (1, 1.0), (2, 1.0)],
>>>           [(2, 1.0), (3, 1.0), (4, 1.0), (5, 1.0), (6, 1.0), (8, 1.0)],
>>>           [(1, 1.0), (3, 1.0), (4, 1.0), (7, 1.0)],
>>>           [(0, 1.0), (4, 2.0), (7, 1.0)],
>>>           [(3, 1.0), (5, 1.0), (6, 1.0)],
>>>           [(9, 1.0)],
>>>           [(9, 1.0), (10, 1.0)],
>>>           [(9, 1.0), (10, 1.0), (11, 1.0)],
>>>           [(8, 1.0), (10, 1.0), (11, 1.0)]]

In gensim, a corpus is simply an object which, when iterated over, returns its documents represented as sparse vectors. In this case we're using a list of lists of tuples. If you're not familiar with the vector space model, we'll bridge the gap between raw strings, corpora, and sparse vectors in the next tutorial on Corpora and Vector Spaces.

If you're familiar with the vector space model, you'll probably know that the way you parse your documents and convert them to vectors has a major impact on the quality of any subsequent applications.


Note

In this example, the whole corpus is stored in memory, as a Python list. However, the corpus interface only dictates that a corpus must support iteration over its constituent documents. For very large corpora, it is advantageous to keep the corpus on disk, and access its documents sequentially, one at a time. All the operations and transformations are implemented in such a way that makes them independent of the size of the corpus, memory-wise.
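The note above only requires that a corpus be iterable. A minimal sketch of a disk-backed corpus is shown below; the file name and its one-document-per-line, `feature_id:weight` format are assumptions made for illustration, not part of gensim's API:

```python
class StreamedCorpus:
    """Yield documents as sparse vectors, reading from disk one line at a time.

    Assumed file format (illustrative): each line is one document, written as
    space-separated "feature_id:weight" pairs, e.g. "0:1.0 2:2.0".
    """

    def __init__(self, path):
        self.path = path

    def __iter__(self):
        # Only one document is ever held in memory at a time, so the
        # corpus size is limited by disk space, not RAM.
        with open(self.path) as f:
            for line in f:
                yield [(int(fid), float(weight))
                       for fid, weight in (pair.split(':')
                                           for pair in line.split())]
```

Because `__iter__` reopens the file each time, the corpus can be iterated over repeatedly, which is exactly what gensim's transformations expect.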


Next, let’s initialize a transformation:

>>> tfidf = models.TfidfModel(corpus)

A transformation is used to convert documents from one vector representation into another:

>>> vec = [(0, 1), (4, 1)]
>>> print(tfidf[vec])
[(0, 0.8075244), (4, 0.5898342)]

Here, we used Tf-Idf, a simple transformation which takes documents represented as bag-of-words counts and applies a weighting which discounts common terms (or, equivalently, promotes rare terms). It also scales the resulting vector to unit length (in the Euclidean norm).
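The unit-length scaling can be verified directly on the output printed above; the weights below are copied from that output, not recomputed:

```python
import math

# Tf-Idf weights from the example output for vec = [(0, 1), (4, 1)]
weights = [0.8075244, 0.5898342]

# Euclidean (L2) norm of the transformed vector
norm = math.sqrt(sum(w * w for w in weights))
print(norm)  # close to 1.0, confirming the vector is unit length
```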

Transformations are covered in detail in the tutorial on Topics and Transformations.

To transform the whole corpus via TfIdf and index it, in preparation for similarity queries:

>>> index = similarities.SparseMatrixSimilarity(tfidf[corpus], num_features=12)

and to query the similarity of our query vector vec against every document in the corpus:

>>> sims = index[tfidf[vec]]
>>> print(list(enumerate(sims)))
[(0, 0.4662244), (1, 0.19139354), (2, 0.24600551), (3, 0.82094586), (4, 0.0), (5, 0.0), (6, 0.0), (7, 0.0), (8, 0.0)]

How should this output be read? Document number zero (the first document) has a similarity score of 0.466 = 46.6%, the second document has a similarity score of 19.1%, and so on.

Thus, according to the TfIdf document representation and the cosine similarity measure, the document most similar to our query vec is document no. 3, with a similarity score of 82.1%. Note that in the TfIdf representation, any documents which share no features with vec at all (documents no. 4–8) get a similarity score of 0.0. See the Similarity Queries tutorial for more detail.
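In practice one usually wants the documents ranked by similarity rather than listed in corpus order. A small sketch, using the scores copied from the printed output above:

```python
# (document id, similarity score) pairs from the query output above
sims = [(0, 0.4662244), (1, 0.19139354), (2, 0.24600551), (3, 0.82094586),
        (4, 0.0), (5, 0.0), (6, 0.0), (7, 0.0), (8, 0.0)]

# sort by score, most similar document first
ranked = sorted(sims, key=lambda item: item[1], reverse=True)
print(ranked[:3])  # document 3 ranks first, then documents 0 and 2
```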
