gensim word2vec


The original word2vec paper



| Option | Description | Notes |
| --- | --- | --- |
| sg | Whether to use skip-gram | sg=1 uses skip-gram (sg=0 is CBOW) |
| size | Dimensionality of the word vectors | |
| alpha | Initial learning rate | Decays toward min_alpha as training progresses |
| min_alpha | Minimum learning rate | |
| window | Number of words before and after the target word used for training | Because the n words on either side of the target are used, be careful with a corpus of short texts: a large n ends up treating the whole text, rather than the actual surrounding words, as similar |
| min_count | Discard words that appear fewer than n times | For a large corpus, set this yourself based on a rough estimate |
| max_vocab_size | Maximum vocabulary size; caps RAM while building the vocabulary, pruning infrequent words when exceeded | |
| sample | Threshold frequency above which words are randomly downsampled (ignored) | |
| seed | Seed for the random number generator (initial vectors are seeded with a hash of word + seed) | |
| workers | Number of worker threads used for training | |
| hs | Whether to use hierarchical softmax for training | Set hs=0 if you want to use negative sampling |
| negative | Number of words drawn for negative sampling | Set this when not using hs. Words from the corpus vocabulary that do not appear around the target word are learned as dissimilar "noise" words |
| cbow_mean | Whether to use the mean or the sum of the context word vectors | cbow_mean=1 uses the mean |
| hashfxn | Hash function used to randomly initialize the weights | |
| iter | Number of training iterations (epochs) | Try several values and look for the one that gives the best results |
| null_word | ? | |
| trim_rule | Rule deciding whether specific words are kept in the vocabulary | |
| sorted_vocab | Sort the vocabulary by descending frequency | |
| batch_words | Batch size for training (in words) | You may see a warning if the tokenized corpus has fewer words than this setting |
| compute_loss | Whether to compute and store the training loss during training | See the sketch just below this table |
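One row above, compute_loss, deserves a quick illustration: as far as I understand gensim's pre-4.0 API, passing compute_loss=True makes the model accumulate the training loss, which can then be read back with get_latest_training_loss(). A minimal sketch with a made-up toy corpus:

```python
from gensim.models import word2vec

# Tiny made-up corpus: each sentence is a list of already-tokenized words.
sentences = [
    ["i", "like", "word", "vectors"],
    ["word", "vectors", "are", "useful"],
]

# compute_loss=True asks the model to keep a running total of the training loss.
model = word2vec.Word2Vec(sentences, size=50, window=2, min_count=1, iter=20,
                          compute_loss=True, seed=1, workers=1)

# Cumulative loss over the whole training run.
print(model.get_latest_training_loss())
```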



The constructor with all of its default values looks like this (the import is added so the snippet stands on its own; MAX_WORDS_IN_BATCH is a module-level constant in gensim.models.word2vec):

```python
from gensim.models import word2vec

model = word2vec.Word2Vec(sentences=None, size=100, alpha=0.025, window=5, min_count=5,
                          max_vocab_size=None, sample=1e-3, seed=1, workers=3, min_alpha=0.0001,
                          sg=0, hs=0, negative=5, cbow_mean=1, hashfxn=hash, iter=5, null_word=0,
                          trim_rule=None, sorted_vocab=1,
                          batch_words=word2vec.MAX_WORDS_IN_BATCH, compute_loss=False)
```
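A minimal end-to-end usage sketch, assuming gensim's pre-4.0 API (parameters still named size and iter) and a made-up list of tokenized sentences:

```python
from gensim.models import word2vec

# Each sentence is a list of already-tokenized (unicode) words.
sentences = [
    ["i", "like", "natural", "language", "processing"],
    ["word2vec", "learns", "word", "vectors", "from", "text"],
    ["i", "like", "word", "vectors"],
]

# Skip-gram (sg=1) with a small window and min_count=1 because the corpus is tiny.
model = word2vec.Word2Vec(sentences, sg=1, size=100, window=2, min_count=1,
                          iter=50, seed=1, workers=1)

# Nearest neighbours of "word" in the learned vector space.
print(model.wv.most_similar("word", topn=3))
```

With workers=1 and a fixed seed the run is close to reproducible, subject to the PYTHONHASHSEED caveat described in the docstring excerpt below.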



Initialize the model from an iterable of `sentences`. Each sentence is a
        list of words (unicode strings) that will be used for training.
        The `sentences` iterable can be simply a list; for a larger corpus, a streaming
        iterable such as :class:`LineSentence` in this module can be used instead.
-sg defines the training algorithm. By default (`sg=0`), CBOW is used.
        Otherwise (`sg=1`), skip-gram is employed.
-size is the dimensionality of the feature vectors.
-window is the maximum distance between the current and predicted word within a sentence.
-alpha is the initial learning rate (will linearly drop to `min_alpha` as training progresses).
-seed = seed for the random number generator. Initial vectors for each
        word are seeded with a hash of the concatenation of word + str(seed).
        Note that for a fully deterministically-reproducible run, you must also limit the model to
        a single worker thread, to eliminate ordering jitter from OS thread scheduling. (In Python
        3, reproducibility between interpreter launches also requires use of the PYTHONHASHSEED
        environment variable to control hash randomization.)
-min_count = ignore all words with total frequency lower than this.
-max_vocab_size = limit RAM during vocabulary building; if there are more unique
        words than this, then prune the infrequent ones. Every 10 million word types
        need about 1GB of RAM. Set to `None` for no limit (default).
-sample = threshold for configuring which higher-frequency words are randomly downsampled;
            default is 1e-3, useful range is (0, 1e-5).
-workers = use this many worker threads to train the model (=faster training with multicore machines).
-hs = if 1, hierarchical softmax will be used for model training.
        If set to 0 (default), and `negative` is non-zero, negative sampling will be used.
-negative = if > 0, negative sampling will be used, the int for negative
        specifies how many "noise words" should be drawn (usually between 5-20).
        Default is 5. If set to 0, no negative sampling is used.
-cbow_mean = if 0, use the sum of the context word vectors. If 1 (default), use the mean.
        Only applies when cbow is used.
-hashfxn = hash function to use to randomly initialize weights, for increased
        training reproducibility. Default is Python's rudimentary built in hash function.
-iter = number of iterations (epochs) over the corpus. Default is 5.
-trim_rule = vocabulary trimming rule, specifies whether certain words should remain
        in the vocabulary, be trimmed away, or handled using the default (discard if word count < min_count).
        Can be None (min_count will be used), or a callable that accepts parameters (word, count, min_count) and
        returns either `utils.RULE_DISCARD`, `utils.RULE_KEEP` or `utils.RULE_DEFAULT`.
        Note: The rule, if given, is only used to prune vocabulary during build_vocab() and is not stored as part
        of the model. (A short trim_rule sketch follows this parameter list.)
-sorted_vocab = if 1 (default), sort the vocabulary by descending frequency before
        assigning word indexes.
-batch_words = target size (in words) for batches of examples passed to worker threads (and
        thus cython routines). Default is 10000. (Larger batches will be passed if individual
        texts are longer than 10000 words, but the standard cython code truncates to that maximum.)
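As a concrete illustration of trim_rule, here is a minimal sketch of a callable that unconditionally keeps a hand-picked set of words and otherwise falls back to the normal min_count behaviour (the word list and corpus are made up; RULE_KEEP and RULE_DEFAULT live in gensim.utils):

```python
from gensim import utils
from gensim.models import word2vec

# Hypothetical words we never want pruned, even if they are rare.
KEEP_WORDS = {"word2vec", "gensim"}

def my_trim_rule(word, count, min_count):
    # Keep whitelisted words unconditionally; everything else follows
    # the usual "discard if count < min_count" rule.
    if word in KEEP_WORDS:
        return utils.RULE_KEEP
    return utils.RULE_DEFAULT

sentences = [
    ["gensim", "makes", "training", "word", "vectors", "easy"],
    ["word2vec", "learns", "word", "vectors", "from", "text"],
]

model = word2vec.Word2Vec(sentences, size=50, min_count=2, iter=5,
                          trim_rule=my_trim_rule)

# "word2vec" appears only once, but the trim rule keeps it in the vocabulary.
print("word2vec" in model.wv.vocab)  # True
```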

