/documentation/3. ХЭРЭГЖҮҮЛЭЛТ.ipynb
- {
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# IMPLEMENTATION\n",
- "Using a corpus of 50,000 user reviews and ratings collected from the movie site IMDB, a classifier based on a logistic regression model was built to label the sentiment of incoming reviews [5].\n",
- "## Technologies used\n",
- " - Python 3\n",
- " - NumPy 1.11.0\n",
- " - SciPy 0.17.1\n",
- " - matplotlib 1.5.1\n",
- " - scikit-learn 0.17.1\n",
- " - nltk 3.2.1\n",
- " - Flask 0.10.1\n",
- "\n",
- "## Workflow\n",
- " 1. Read the data\n",
- " 2. Clean the text\n",
- " 3. Train a classifier with Logistic Regression\n",
- " 4. Save the trained classifier to a file\n",
- " 5. Test\n",
- " 6. Web application"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### 1. Read the data\n",
- "About 85 MB of data was downloaded from http://ai.stanford.edu/~amaas/data/sentiment/ and reshaped into a form suitable for training the model."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 1,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "0% 100%\n",
- "[##############################] | ETA: 00:00:00\n",
- "Total time elapsed: 00:06:35\n"
- ]
- }
- ],
- "source": [
- "import pyprind\n",
- "import pandas\n",
- "import os\n",
- "\n",
- "basepath = '../aclImdb'\n",
- "\n",
- "labels = {'pos': 1, 'neg': 0}\n",
- "progressBar = pyprind.ProgBar(50000)\n",
- "rows = []\n",
- "for subpath1 in ('test', 'train'):\n",
- "    for subpath2 in ('pos', 'neg'):\n",
- "        path = os.path.join(basepath, subpath1, subpath2)\n",
- "        for file in os.listdir(path):\n",
- "            with open(os.path.join(path, file), 'r', encoding='utf-8') as inputFile:\n",
- "                txt = inputFile.read()\n",
- "            # collect rows in a list; appending to a DataFrame one row at a time is very slow\n",
- "            rows.append([txt, labels[subpath2]])\n",
- "            progressBar.update()\n",
- "dataFrame = pandas.DataFrame(rows, columns=['review', 'sentiment'])"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The loaded data was shuffled randomly."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 2,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "import numpy\n",
- "\n",
- "numpy.random.seed(0)\n",
- "dataFrame = dataFrame.reindex(numpy.random.permutation(dataFrame.index))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The shuffled data was saved as a CSV file."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 7,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "dataFrame.to_csv('../movie_data.csv', index=False, encoding='utf-8')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "A sample row of the prepared data was printed."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 4,
- "metadata": {
- "collapsed": false,
- "scrolled": false
- },
- "outputs": [
- {
- "data": {
- "text/html": [
- "<div>\n",
- "<table border=\"1\" class=\"dataframe\">\n",
- " <thead>\n",
- " <tr style=\"text-align: right;\">\n",
- " <th></th>\n",
- " <th>review</th>\n",
- " <th>sentiment</th>\n",
- " </tr>\n",
- " </thead>\n",
- " <tbody>\n",
- " <tr>\n",
- " <th>0</th>\n",
- " <td>In 1974, the teenager Martha Moxley (Maggie Gr...</td>\n",
- " <td>1</td>\n",
- " </tr>\n",
- " </tbody>\n",
- "</table>\n",
- "</div>"
- ],
- "text/plain": [
- " review sentiment\n",
- "0 In 1974, the teenager Martha Moxley (Maggie Gr... 1"
- ]
- },
- "execution_count": 4,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "import pandas\n",
- "dataFrame = pandas.read_csv('../movie_data.csv', encoding='utf-8')\n",
- "dataFrame.head(1)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### 2. Clean the text\n",
- "HTML tags, stop words that carry no weight for classification, and similar noise were removed from the saved movie_data.csv file."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "A ready-made stop-word list was downloaded and used."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 3,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "[nltk_data] Downloading package stopwords to\n",
- "[nltk_data] C:\\Users\\User\\AppData\\Roaming\\nltk_data...\n",
- "[nltk_data] Unzipping corpora\\stopwords.zip.\n"
- ]
- },
- {
- "data": {
- "text/plain": [
- "True"
- ]
- },
- "execution_count": 3,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "import nltk\n",
- "nltk.download('stopwords')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "A function that strips unneeded text from the input"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 16,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "import numpy\n",
- "import re\n",
- "from nltk.corpus import stopwords\n",
- "\n",
- "def tokenizer(text):\n",
- "    # strip HTML tags\n",
- "    text = re.sub('<[^>]*>', '', text)\n",
- "    # keep emoticons such as :) :( :D :P before punctuation is removed\n",
- "    emoticons = re.findall('(?::|;|=)(?:-)?(?:\\)|\\(|D|P)', text.lower())\n",
- "    # drop non-word characters, then re-append the emoticons without the '-' nose\n",
- "    text = re.sub('[\\W]+', ' ', text.lower()) + ' '.join(emoticons).replace('-', '')\n",
- "    # remove English stop words\n",
- "    tokenized = [w for w in text.split() if w not in stopwords.words('english')]\n",
- "    return tokenized"
- ]
- },
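- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "For illustration, the cleaning steps can be traced on a short sample sentence. The cell below is a self-contained sketch: its name and its small inline stop-word set are stand-ins for the real tokenizer and nltk's English list."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "import re\n",
- "\n",
- "# tiny inline stop-word set, a stand-in for stopwords.words('english')\n",
- "stop = {'this', 'is', 'a'}\n",
- "\n",
- "def demo_tokenizer(text):\n",
- "    text = re.sub('<[^>]*>', '', text)  # strip HTML tags\n",
- "    emoticons = re.findall('(?::|;|=)(?:-)?(?:\\)|\\(|D|P)', text.lower())  # keep emoticons\n",
- "    text = re.sub('[\\W]+', ' ', text.lower()) + ' '.join(emoticons).replace('-', '')\n",
- "    return [w for w in text.split() if w not in stop]\n",
- "\n",
- "demo_tokenizer('</a>This :) is :( a great movie :-)!')\n",
- "# expected: ['great', 'movie', ':)', ':(', ':)']"
- ]
- },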
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "A function that reads the movie_data.csv file"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 17,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "def stream_docs(path):\n",
- " with open(path, 'r', encoding='utf-8') as csv:\n",
- "        # skip the header line\n",
- "        next(csv)\n",
- "        for line in csv:\n",
- "            # every row ends with ',<label>', so slice the label off the tail\n",
- "            text, label = line[:-3], int(line[-2])\n",
- " yield text, label\n",
- "\n",
- "def get_minibatch(doc_stream, size):\n",
- " docs, y = [], []\n",
- " try:\n",
- " for _ in range(size):\n",
- " text, label = next(doc_stream)\n",
- " docs.append(text)\n",
- " y.append(label)\n",
- " except StopIteration:\n",
- " return None, None\n",
- " return docs, y"
- ]
- },
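- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The slicing in stream_docs relies on every row of the CSV ending with ',<label>'. A quick sanity check on a hypothetical sample row:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# hypothetical sample row in the '<review>,<label>' format stream_docs expects\n",
- "line = '\"A surprisingly good movie.\",1\\n'\n",
- "text, label = line[:-3], int(line[-2])\n",
- "(text, label)\n",
- "# expected: ('\"A surprisingly good movie.\"', 1)"
- ]
- },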
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### 3. Train a classifier with Logistic Regression\n",
- "Using the preprocessing functions above, feature vectors were extracted and a logistic regression model was fitted with scikit-learn's SGDClassifier."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 18,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "from sklearn.feature_extraction.text import HashingVectorizer\n",
- "from sklearn.linear_model import SGDClassifier\n",
- "\n",
- "vect = HashingVectorizer(decode_error='ignore', \n",
- " n_features=2**21,\n",
- " preprocessor=None, \n",
- " tokenizer=tokenizer)\n",
- "\n",
- "clf = SGDClassifier(loss='log', random_state=1, n_iter=1)\n",
- "doc_stream = stream_docs(path='../movie_data.csv')"
- ]
- },
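- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "HashingVectorizer maps each token to a fixed-width sparse vector via the hashing trick, so no vocabulary has to be held in memory; that is what makes the out-of-core partial_fit loop possible. A minimal sketch, using the default token pattern rather than the custom tokenizer above:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from sklearn.feature_extraction.text import HashingVectorizer\n",
- "\n",
- "# default token pattern here; the real pipeline passes tokenizer=tokenizer\n",
- "demo_vect = HashingVectorizer(n_features=2**21)\n",
- "demo_vect.transform(['a great movie', 'a terrible movie']).shape\n",
- "# expected: (2, 2097152) - fixed width regardless of corpus size"
- ]
- },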
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The model was trained on the first 45,000 reviews."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 19,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "0% 100%\n",
- "[##############################] | ETA: 00:00:00\n",
- "Total time elapsed: 00:44:16\n"
- ]
- }
- ],
- "source": [
- "import pyprind\n",
- "progressbar = pyprind.ProgBar(45)\n",
- "\n",
- "classes = numpy.array([0, 1])\n",
- "for _ in range(45):\n",
- " X_train, y_train = get_minibatch(doc_stream, size=1000)\n",
- " if not X_train:\n",
- " break\n",
- " X_train = vect.transform(X_train)\n",
- " clf.partial_fit(X_train, y_train, classes=classes)\n",
- " progressbar.update()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "It was tested on the last 5,000 reviews."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 20,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Accuracy: 0.867\n"
- ]
- }
- ],
- "source": [
- "X_test, y_test = get_minibatch(doc_stream, size=5000)\n",
- "X_test = vect.transform(X_test)\n",
- "print('Accuracy: %.3f' % clf.score(X_test, y_test))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### 4. Save the trained classifier to a file\n",
- "Retraining every time the program starts would waste time, so the finished model was serialized to a Python pickle file and reused."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 22,
- "metadata": {
- "collapsed": false
- },
- "outputs": [],
- "source": [
- "import pickle\n",
- "import os\n",
- "\n",
- "dest = os.path.join('../application', 'pkl_objects')\n",
- "if not os.path.exists(dest):\n",
- " os.makedirs(dest)\n",
- "\n",
- "pickle.dump(stopwords.words('english'), open(os.path.join(dest, 'stopwords.pkl'), 'wb'), protocol=4) \n",
- "pickle.dump(clf, open(os.path.join(dest, 'classifier.pkl'), 'wb'), protocol=4)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The function that converts text into a feature vector was also saved to a separate file."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 23,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Writing ../application/vectorizer.py\n"
- ]
- }
- ],
- "source": [
- "%%writefile ../application/vectorizer.py\n",
- "from sklearn.feature_extraction.text import HashingVectorizer\n",
- "import re\n",
- "import os\n",
- "import pickle\n",
- "\n",
- "cur_dir = os.path.dirname(__file__)\n",
- "# resolve the pickle relative to this module, not the current working directory\n",
- "stop = pickle.load(open(\n",
- "    os.path.join(cur_dir,\n",
- "                 'pkl_objects',\n",
- "                 'stopwords.pkl'), 'rb'))\n",
- "\n",
- "def tokenizer(text):\n",
- " text = re.sub('<[^>]*>', '', text)\n",
- " emoticons = re.findall('(?::|;|=)(?:-)?(?:\\)|\\(|D|P)',\n",
- " text.lower())\n",
- " text = re.sub('[\\W]+', ' ', text.lower()) \\\n",
- " + ' '.join(emoticons).replace('-', '')\n",
- " tokenized = [w for w in text.split() if w not in stop]\n",
- " return tokenized\n",
- "\n",
- "vect = HashingVectorizer(decode_error='ignore',\n",
- " n_features=2**21,\n",
- " preprocessor=None,\n",
- " tokenizer=tokenizer)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### 5. Test"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 25,
- "metadata": {
- "collapsed": false
- },
- "outputs": [],
- "source": [
- "import os\n",
- "os.chdir('../application')"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 26,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "import pickle\n",
- "import re\n",
- "import os\n",
- "from vectorizer import vect\n",
- "\n",
- "clf = pickle.load(open(os.path.join('pkl_objects', 'classifier.pkl'), 'rb'))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "For example, given an input such as 'I hate it', the classifier labels it 'negative' (dislike) with 97% probability."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 28,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Class: Negative\n",
- "Probability: 97.05%\n"
- ]
- }
- ],
- "source": [
- "label = {0: 'Negative', 1: 'Positive'}\n",
- "\n",
- "example = ['I hate this movie. Very bad']\n",
- "X = vect.transform(example)\n",
- "print('Class: %s\\nProbability: %.2f%%' %\\\n",
- "      (label[clf.predict(X)[0]], clf.predict_proba(X).max()*100))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### 6. Web application"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Using the trained classifier, a small web application was developed that takes a user's review, automatically determines the user's attitude toward the movie (positive or negative, i.e. like or dislike), lets any user confirm whether the classification was correct (feedback), and uses that feedback to keep training itself."
- ]
- }
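- ,
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The serving side could be sketched roughly as below. This is a hypothetical illustration only: the '/classify' route, field names, and response shape are assumptions, not the actual application code."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# hypothetical sketch of a classification endpoint (run from the application directory)\n",
- "import os\n",
- "import pickle\n",
- "from flask import Flask, request, jsonify\n",
- "from vectorizer import vect  # the module written in step 4\n",
- "\n",
- "app = Flask(__name__)\n",
- "clf = pickle.load(open(os.path.join('pkl_objects', 'classifier.pkl'), 'rb'))\n",
- "label = {0: 'Negative', 1: 'Positive'}\n",
- "\n",
- "@app.route('/classify', methods=['POST'])\n",
- "def classify():\n",
- "    review = request.form['review']\n",
- "    X = vect.transform([review])\n",
- "    return jsonify(sentiment=label[clf.predict(X)[0]],\n",
- "                   probability=round(float(clf.predict_proba(X).max()) * 100, 2))\n",
- "\n",
- "if __name__ == '__main__':\n",
- "    app.run()"
- ]
- }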
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.5.2"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 0
- }