Deep Learning

June 23, 2017
 
1. Preparation
1.1 Install Python
Python 3.6 is currently supported; download Anaconda3 directly:
Anaconda 4.4.0 for Windows, Python 3.6, 64-bit
https://www.continuum.io/downloads
After the download, run the installer; at the environment-variable step, choose to have the environment variables configured.
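
After installation, a quick way to confirm that the interpreter on PATH is the Anaconda one (a minimal check of my own, not part of the original setup):

# Sanity check: confirm the Python version and which interpreter is on PATH
# (assumes Anaconda3 configured the environment variables during install).
import sys

print(sys.version)     # expect a 3.6.x Anaconda build
print(sys.executable)  # expect a path under the Anaconda3 install directory
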
1.2 Download Visual Studio Community 2017 to install the VC++ build environment
https://www.visualstudio.com/zh-hans/, download Visual Studio Community 2017
During installation, on the Workloads tab, select "Desktop development with C++".
Environment variable setup
Add the following to the PATH environment variable:
D:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.10.25017\bin\HostX64\x64
D:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE
Create a new LIB environment variable:
D:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.10.25017\lib\x64
Create a new INCLUDE environment variable:
D:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.10.25017\include
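
To verify the compiler is reachable after setting these variables (a small check of my own; the 14.10.25017 version directory varies by VS release):

# Check that the VC++ compiler (cl.exe) is reachable via PATH.
import shutil

print(shutil.which("cl"))  # should print the path to cl.exe, or None if PATH is wrong
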
1.3 Install CUDA
Download CUDA: https://developer.nvidia.com/cuda-downloads
Run the installer; it configures the CUDA environment variables automatically. You can check the CUDA_PATH value on the environment-variables page.
Download cuDNN: https://developer.nvidia.com/cudnn
cudnn-8.0-windows10-x64-v5.1: unzip the archive, place the folder under the CUDA install directory, then copy cudnn64_5.dll from the cuDNN bin folder into the bin folder of the CUDA install directory.
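
A minimal sketch to verify the copy succeeded, assuming the CUDA installer set the CUDA_PATH environment variable as noted above:

# Verify cudnn64_5.dll ended up in the CUDA bin directory
# (CUDA_PATH is set automatically by the CUDA installer).
import os

cuda_path = os.environ.get("CUDA_PATH")
if cuda_path is None:
    print("CUDA_PATH is not set; check the environment variable settings")
else:
    dll = os.path.join(cuda_path, "bin", "cudnn64_5.dll")
    print(dll, "found" if os.path.exists(dll) else "MISSING")
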
2. Install tensorflow-gpu
pip install --upgrade -I setuptools
pip install --upgrade tensorflow-gpu
3. Test
# Build a tiny graph and run it to verify the install (TensorFlow 1.x API).
import tensorflow as tf

a = tf.constant(2.0, dtype=tf.float32)
b = tf.constant(3.0, dtype=tf.float32)
total = tf.add(a, b)  # renamed from "sum" to avoid shadowing the built-in
with tf.Session() as sess:
    print(sess.run([total]))
Run log:
D:\Anaconda3\python.exe E:/PycharmProject/deeplearning/demo.py
2017-06-19 11:18:46.312109: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE instructions, but these are available on your machine and could speed up CPU computations.
2017-06-19 11:18:46.312532: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-19 11:18:46.312924: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-19 11:18:46.313309: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-19 11:18:46.313686: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-19 11:18:46.314053: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-06-19 11:18:46.314408: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-19 11:18:46.314767: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-06-19 11:18:48.199218: I c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:940] Found device 0 with properties:
name: GeForce 940MX
major: 5 minor: 0 memoryClockRate (GHz) 1.189
pciBusID 0000:01:00.0
Total memory: 4.00GiB
Free memory: 3.36GiB
2017-06-19 11:18:48.199659: I c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:961] DMA: 0
2017-06-19 11:18:48.199868: I c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:971] 0:   Y
2017-06-19 11:18:48.200090: I c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce 940MX, pci bus id: 0000:01:00.0)
[5.0]
Process finished with exit code 0
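
Besides reading the startup log, TensorFlow 1.x can enumerate the devices it sees; if the CUDA setup worked, a gpu:0 entry should appear alongside cpu:0 (a small check of my own, not from the original test):

# List the devices TensorFlow can use (TensorFlow 1.x API).
from tensorflow.python.client import device_lib

for device in device_lib.list_local_devices():
    print(device.name, device.device_type)
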

 
 Posted at 9:47 PM
June 16, 2017
 
Comparison of Keras models: VGG16, VGG19, ResNet, Inception
# -*- coding: utf-8 -*-
# @DATE    : 2017/6/14 14:15
# @Author  : 
# @File    : classify_image.py

import os

import tensorflow as tf

from keras.applications import ResNet50
from keras.applications import InceptionV3
from keras.applications import Xception # TensorFlow ONLY
from keras.applications import VGG16
from keras.applications import VGG19
from keras.applications import imagenet_utils
from keras.applications.inception_v3 import preprocess_input
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import load_img
import numpy as np


MODEL = {
    "vgg16": VGG16,
    "vgg19": VGG19,
    "inception": InceptionV3,
    "xception": Xception,
    "resnet": ResNet50
}

def load_preprocess_image(model_name, image_path):
    # vgg resnet
    input_shape = (224, 224)
    preprocess_func = imagenet_utils.preprocess_input

    if model_name in ["inception", "xception"]:
        input_shape = (299, 299)
        preprocess_func = preprocess_input

    # load image
    image = load_img(image_path, target_size=input_shape)

    # preprocess image
    image = img_to_array(image)
    image = np.expand_dims(image, axis=0)
    image = preprocess_func(image)

    return image


def image_model(model_name):
    network = MODEL[model_name]
    network = network(weights="imagenet")
    return network

def classify_image(model, image):
    preds = model.predict(image)
    preds = imagenet_utils.decode_predictions(preds)
    return preds


if __name__ == "__main__":
    data_dir = "data"
    image_names = ["ball.jpg"]
    image_paths = [ os.path.join(data_dir, image_name)  for image_name in image_names]
    model_names = ["vgg16", "vgg19", "resnet", "inception", "xception"]
    for image_path in image_paths:
        print("Image: {}".format(os.path.split(image_path)[-1]))
        for model_name in model_names:
            image = load_preprocess_image(model_name, image_path)
            model = image_model(model_name)
            preds = classify_image(model, image)
            print("Model: {}".format(model_name))
            print("Results(Top 5): ")
            for (i, (imagenetID, label, prob)) in enumerate(preds[0]):
                print("{}. {}: {:.2f}%".format(i + 1, label, prob * 100))


Run log:

Using TensorFlow backend.
Image: ball.jpg
2017-06-16 19:50:03.635390: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-16 19:50:03.635405: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-06-16 19:50:03.635408: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-16 19:50:03.635413: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
Model: vgg16
Results(Top 5): 
1. soccer_ball: 93.73%
2. rugby_ball: 5.88%
3. volleyball: 0.13%
4. golf_ball: 0.11%
5. tennis_ball: 0.04%
Model: vgg19
Results(Top 5): 
1. soccer_ball: 98.07%
2. rugby_ball: 1.24%
3. golf_ball: 0.32%
4. volleyball: 0.20%
5. croquet_ball: 0.07%
Model: resnet
Results(Top 5): 
1. soccer_ball: 99.54%
2. rugby_ball: 0.39%
3. volleyball: 0.06%
4. running_shoe: 0.00%
5. football_helmet: 0.00%
Model: inception
Results(Top 5): 
1. soccer_ball: 99.88%
2. volleyball: 0.11%
3. rugby_ball: 0.00%
4. sea_urchin: 0.00%
5. silky_terrier: 0.00%
Model: xception
Results(Top 5): 
1. soccer_ball: 90.85%
2. volleyball: 2.48%
3. rugby_ball: 1.37%
4. balloon: 0.11%
5. airship: 0.10%
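
One caveat about the script above: image_model() rebuilds (and on first run, downloads) each network for every image/model pair. When classifying many images, it is cheaper to construct each model once and reuse it; a sketch of that restructuring, reusing the definitions from the script above:

# Sketch: build each network once, outside the per-image loop, then reuse it.
# Reuses MODEL, model_names, image_paths, load_preprocess_image and
# classify_image from the script above.
models = {name: MODEL[name](weights="imagenet") for name in model_names}

for image_path in image_paths:
    for model_name, model in models.items():
        image = load_preprocess_image(model_name, image_path)
        preds = classify_image(model, image)
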

Reference: http://www.pyimagesearch.com/2017/03/20/imagenet-vggnet-resnet-inception-xception-keras/

 
 Posted at 7:47 PM
June 16, 2017
 
Chinese-Character-Recognition https://github.com/soloice/Chinese-Character-Recognition
ImageNet: VGGNet, ResNet, Inception, and Xception with Keras http://www.pyimagesearch.com/2017/03/20/imagenet-vggnet-resnet-inception-xception-keras/
TensorFlow large-scale input handling; see the CIFAR-10 code https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10_input.py
A Survey on Deep Learning in Medical Image Analysis https://arxiv.org/pdf/1702.05747.pdf
http://www.jeyzhang.com/understanding-lstm-network.html
Awesome Deep learning papers and other resources https://github.com/endymecy/awesome-deeplearning-resources
Multi-Scale Convolutional Neural Networks for Time Series Classification https://arxiv.org/pdf/1603.06995v4.pdf
LSTM time series prediction https://github.com/RobRomijnders/LSTM_tsc
CNN time series prediction http://robromijnders.github.io/CNN_tsc/

 
 Posted at 7:45 PM
June 12, 2017
 
Deep learning course http://www.samuelcheng.info/deeplearning_2017/
Docker from beginner to practice https://www.gitbook.com/book/yeasy/docker_practice/details
LONG SHORT-TERM MEMORY http://www.bioinf.jku.at/publications/older/2604.pdf
http://seanlook.com/tags/docker/
docker run -ti --volume=$(pwd):/workspace caffe:cpu bash — starts Docker with the working directory mapped; to detach from the container back to the host shell, press Ctrl-P, then Ctrl-Q
Container lifecycle management — docker [run|start|stop|restart|kill|rm|pause|unpause]
Container operations — docker [ps|inspect|top|attach|events|logs|wait|export|port]
Container rootfs commands — docker [commit|cp|diff]
Image registry — docker [login|pull|push|search]
Local image management — docker [images|rmi|tag|build|history|save|import]
Other commands — docker [info|version]
https://huangying-zhan.github.io/ Faster R-CNN, Fast R-CNN, R-CNN
Transferrable Representations for Visual Recognition https://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-106.pdf

https://yahooeng.tumblr.com/post/151148689421/open-sourcing-a-deep-learning-solution-for
1. Install Docker on macOS: download the installer from https://store.docker.com/editions/community/docker-ce-desktop-mac and install it directly
2. Check the Docker version: docker --version
Docker version 17.03.1-ce, build c6d412e
3. Test a web service:
docker run -d -p 80:80 --name webserver nginx, then open localhost in a browser
4. Download the nsfw code: git clone https://github.com/yahoo/open_nsfw
5. Enter the nsfw code directory: cd open_nsfw/
6. Download the Caffe Dockerfile: wget https://github.com/BVLC/caffe/raw/master/docker/cpu/Dockerfile
7. Build the Caffe image: docker build -t caffe:cpu ./
8. Start Docker and check the Caffe version:
docker run -ti caffe:cpu caffe --version
9. Map the working directory:
docker run -ti --volume=$(pwd):/workspace caffe:cpu bash
10. Test NSFW image detection:
wget http://image.tianjimedia.com/uploadImages/2015/288/26/R99Q7A2345V5.jpg
mv R99Q7A2345V5.jpg test3.jpg
python ./classify_nsfw.py \
--model_def nsfw_model/deploy.prototxt \
--pretrained_model nsfw_model/resnet_50_1by2_nsfw.caffemodel \
test3.jpg
Run log:
I0605 12:36:59.237032    11 upgrade_proto.cpp:77] Attempting to upgrade batch norm layers using deprecated params: nsfw_model/resnet_50_1by2_nsfw.caffemodel
I0605 12:36:59.237094    11 upgrade_proto.cpp:80] Successfully upgraded batch norm layers using deprecated params.
I0605 12:36:59.242766    11 net.cpp:744] Ignoring source layer loss
NSFW score:   0.970513343811
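
The score is the model's probability that the image is NSFW (scores near 0 mean safe, scores near 1 mean very likely NSFW). A hypothetical wrapper for scoring several images from Python inside the container; it simply shells out to classify_nsfw.py and parses the score line shown in the log above:

# Hypothetical batch wrapper around Yahoo's classify_nsfw.py
# (run inside the caffe:cpu container, from the /workspace directory).
import subprocess

def nsfw_score(image_path):
    out = subprocess.check_output([
        "python", "./classify_nsfw.py",
        "--model_def", "nsfw_model/deploy.prototxt",
        "--pretrained_model", "nsfw_model/resnet_50_1by2_nsfw.caffemodel",
        image_path,
    ]).decode("utf-8")
    # classify_nsfw.py prints a line like "NSFW score:   0.970...";
    # parse the trailing number from that line.
    for line in out.splitlines():
        if "NSFW score" in line:
            return float(line.split()[-1])
    return None

print(nsfw_score("test3.jpg"))
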

https://github.com/alex-paterson/Barebones-Flask-and-Caffe-Classifier  


 
 Posted at 6:05 PM
May 28, 2017
 
TensorFlow for Machine Intelligence https://github.com/backstopmedia/tensorflowbook
Picasso: A free open-source visualizer for Convolutional Neural Networks https://github.com/merantix/picasso
Network Dissection: Quantifying Interpretability of Deep Visual Representations http://netdissect.csail.mit.edu/
Google's TensorFlow numerical computation and machine learning library https://github.com/memo/ofxMSATensorFlow
tensorflow cnn http://upflow.co/l/msqr/2017/04/01/image-classification-using-convolutional-neural-networks-in-tensorflow
LSTM by Example using Tensorflow https://medium.com/towards-data-science/lstm-by-example-using-tensorflow-feb0c1968537
stanford imagenet dog dataset http://vision.stanford.edu/aditya86/ImageNetDogs/
GPU-accelerated Keras with Tensorflow or Theano on Windows 10 native https://github.com/philferriere/dlwin
Convolutional Neural Networks for Visual Recognition http://cs231n.stanford.edu/

 
 Posted at 10:04 PM
May 19, 2017
 
2017 Data Science Bowl, 2nd-place solution https://github.com/dhammack/DSB2017/ https://github.com/juliandewit/kaggle_ndsb2017/
Introduction to reinforcement learning https://lufficc.com/blog/reinforcement-learning-and-implementation https://github.com/lufficc/dqn
Demystifying Deep Reinforcement Learning https://www.nervanasys.com/demystifying-deep-reinforcement-learning/
Santander Product Recommendation https://github.com/ttvand/Santander-Product-Recommendation
https://github.com/aaron-xichen/pytorch-playground Base pretrained models and datasets in pytorch (MNIST, SVHN, CIFAR10, CIFAR100, STL10, AlexNet, VGG16, VGG19, ResNet, Inception, SqueezeNet)
Berkeley deep reinforcement learning course http://rll.berkeley.edu/deeprlcourse/
DQN from beginner to giving up, part 1: DQN and reinforcement learning https://zhuanlan.zhihu.com/p/21262246
AlphaGo overview http://www0.cs.ucl.ac.uk/staff/D.Silver/web/Resources_files/AlphaGo_IJCAI.pdf
geohash https://en.wikipedia.org/wiki/Geohash http://blog.csdn.net/zmx729618/article/details/53068170 http://www.cnblogs.com/dengxinglin/archive/2012/12/14/2817761.html (Maven dependencies below; an encoder sketch follows this list)
    <dependency>
        <groupId>com.spatial4j</groupId>
        <artifactId>spatial4j</artifactId>
        <version>0.5</version>
    </dependency>
    <dependency>
        <groupId>ch.hsr</groupId>
        <artifactId>geohash</artifactId>
        <version>1.3.0</version>
    </dependency>
http://blog.csdn.net/ghsau/article/details/50591932
deepmind papers https://deepmind.com/research/publications/
How we built Tagger News: machine learning on a tight schedule http://varianceexplained.org/programming/tagger-news/  https://github.com/dodger487
mnist data csv format https://pjreddie.com/projects/mnist-in-csv/
deep chatbots https://github.com/mckinziebrandon/DeepChatModels
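
As referenced in the geohash entry above, here is a minimal Python sketch of the standard encoding (binary-search the latitude/longitude ranges, interleave the bits starting with longitude, and emit a base32 character every 5 bits). This is my own illustration, not code from the linked articles:

# Minimal geohash encoder: interleave longitude/latitude range-halving bits,
# base32-encoding every 5 bits.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=8):
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    chars, bits, bit_count, even = [], 0, 0, True  # even bit -> longitude
    while len(chars) < precision:
        rng, val = (lon_range, lon) if even else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        if val >= mid:
            bits = bits * 2 + 1
            rng[0] = mid
        else:
            bits = bits * 2
            rng[1] = mid
        even = not even
        bit_count += 1
        if bit_count == 5:
            chars.append(BASE32[bits])
            bits, bit_count = 0, 0
    return "".join(chars)

print(geohash_encode(39.92324, 116.3906))  # hash starting with "wx4g" (Beijing area)
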

 
 Posted at 11:36 PM
April 8, 2017
 

https://github.com/kwotsin/awesome-deep-vision
from https://github.com/m2dsupsdlclass/lectures-labs
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. LeNet
Simonyan, Karen, and Zisserman. "Very deep convolutional networks for large-scale image recognition." (2014) VGG-16
Simplified version of Krizhevsky, Alex, Sutskever, and Hinton. "Imagenet classification with deep convolutional neural networks." NIPS 2012 AlexNet
He, Kaiming, et al. "Deep residual learning for image recognition." CVPR. 2016. ResNet
Szegedy, et al. "Inception-v4, inception-resnet and the impact of residual connections on learning." (2016)
Canziani, Paszke, and Culurciello. "An Analysis of Deep Neural Network Models for Practical Applications." (May 2016).

classification and localization
Redmon, Joseph, et al. "You only look once: Unified, real-time object detection." CVPR (2016)
Liu, Wei, et al. "SSD: Single shot multibox detector." ECCV 2016
Girshick, Ross, et al. "Fast r-cnn." ICCV 2015
Ren, Shaoqing, et al. "Faster r-cnn: Towards real-time object detection with region proposal networks." NIPS 2015
Redmon, Joseph, et al. "YOLO9000: Better, Faster, Stronger." 2017

segmentation
Long, Jonathan, et al. "Fully convolutional networks for semantic segmentation." CVPR 2015
Noh, Hyeonwoo, et al. "Learning deconvolution network for semantic segmentation." ICCV 2015
Pinheiro, Pedro O., et al. "Learning to segment object candidates" / "Learning to refine object segments", NIPS 2015 / ECCV 2016
Li, Yi, et al. "Fully Convolutional Instance-aware Semantic Segmentation." Winner of COCO challenge 2016.

Weak supervision
Joulin, Armand, et al. "Learning visual features from large weakly supervised data." ECCV, 2016
Oquab, Maxime, "Is object localization for free? – Weakly-supervised learning with convolutional neural networks", 2015

Self-supervised learning
Doersch, Carl, Abhinav Gupta, and Alexei A. Efros. "Unsupervised visual representation learning by context prediction." ICCV 2015.


DNN optimization
Ren, Mengye, et al. "Normalizing the Normalizers: Comparing and Extending Network Normalization Schemes." 2017
Salimans, Tim, and Diederik P. Kingma. "Weight normalization: A simple reparameterization to accelerate training of deep neural networks." NIPS 2016.
Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. "Layer normalization." 2016.
Ioffe, Sergey, and Christian Szegedy. "Batch normalization: Accelerating deep network training by reducing internal covariate shift." ICML 2015
Generalization
Understanding deep learning requires rethinking generalization, C. Zhang et al., 2016.
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, N. S. Keskar et al., 2016
1. A strong optimizer is not necessarily a strong learner.
2. DL optimization is non-convex but bad local minima and saddle structures are rarely a problem (on common DL tasks).
3. Neural Networks are over-parametrized but can still generalize.
4. Stochastic Gradient is a strong implicit regularizer.
5. Variance in gradient can help with generalization but can hurt final convergence.
6. We need more theory to guide the design of architectures and optimizers that make learning faster with fewer labels.
7. Overparametrize deep architectures
8. Design architectures to limit conditioning issues:
(1)Use skip / residual connections
(2)Internal normalization layers
(3)Use stochastic optimizers that are robust to bad conditioning
9. Use small minibatches (at least at the beginning of optimization)
10. Use validation set to anneal learning rate and do early stopping
11. It is very often possible to trade more compute for less overfitting with data augmentation and stochastic regularizers (e.g. dropout); see the sketch after this list.
12. Collecting more labelled data is the best way to avoid overfitting.
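
A hedged Keras sketch of point 11, trading compute for less overfitting with data augmentation and dropout (illustrative model and parameters of my own, not from the cited slides):

# Sketch of point 11: data augmentation + dropout as stochastic regularizers.
from keras.models import Sequential
from keras.layers import Flatten, Dense, Dropout
from keras.preprocessing.image import ImageDataGenerator

# random shifts/flips generate extra training variation at the cost of compute
datagen = ImageDataGenerator(rotation_range=15, horizontal_flip=True,
                             width_shift_range=0.1, height_shift_range=0.1)

model = Sequential([
    Flatten(input_shape=(32, 32, 3)),
    Dense(256, activation="relu"),
    Dropout(0.5),  # stochastic regularizer
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# training would then look like:
# model.fit_generator(datagen.flow(x_train, y_train, batch_size=32),
#                     steps_per_epoch=len(x_train) // 32, epochs=10)
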


 
 Posted at 4:04 PM
February 27, 2017
 
A small experiment: predicting stock prices with an RNN

RNN example

Build a time-series prediction model on a basic RNN to predict the trend of a stock price. This example is meant only as an RNN learning exercise.

Code


# -*- coding: utf-8 -*-
# @DATE    : 2017/2/14 17:50
# @Author  : 
# @File    : stock_predict.py

import os
import sys
import datetime

import tensorflow as tf
import pandas as pd
import numpy as np
from yahoo_finance import Share
import matplotlib.pyplot as plt

from utils import get_n_day_before, date_2_str


class StockRNN(object):
    def __init__(self, seq_size=12, input_dims=1, hidden_layer_size=12, stock_id="BABA", days=365, log_dir="stock_model/"):
        self.seq_size = seq_size
        self.input_dims = input_dims
        self.hidden_layer_size = hidden_layer_size
        self.stock_id = stock_id
        self.days = days
        self.data = self._read_stock_data()["Adj_Close"].astype(float).values
        self.log_dir = log_dir

    def _read_stock_data(self):
        stock = Share(self.stock_id)
        end_date = date_2_str(datetime.date.today())
        start_date = get_n_day_before(200)
        # print(start_date, end_date)

        his_data = stock.get_historical(start_date=start_date, end_date=end_date)
        stock_pd = pd.DataFrame(his_data)
        stock_pd["Adj_Close"] = stock_pd["Adj_Close"].astype(float)
        stock_pd.sort_values(["Date"], inplace=True, ascending=True)
        stock_pd.reset_index(inplace=True)
        return stock_pd[["Date", "Adj_Close"]]

    def _create_placeholders(self):
        with tf.name_scope(name="data"):
            self.X = tf.placeholder(tf.float32, [None, self.seq_size, self.input_dims], name="x_input")
            self.Y = tf.placeholder(tf.float32, [None, self.seq_size], name="y_input")

    def init_network(self, log_dir):
        print("Init RNN network")
        self.log_dir = log_dir
        self.sess = tf.Session()
        self.summary_op = tf.summary.merge_all()
        self.saver = tf.train.Saver()
        self.summary_writer = tf.summary.FileWriter(self.log_dir, self.sess.graph)
        self.sess.run(tf.global_variables_initializer())
        ckpt = tf.train.get_checkpoint_state(self.log_dir)
        if ckpt and ckpt.model_checkpoint_path:
            self.saver.restore(self.sess, ckpt.model_checkpoint_path)
            print("Model restore")

        self.coord = tf.train.Coordinator()
        self.threads = tf.train.start_queue_runners(self.sess, self.coord)

    def _create_rnn(self):
        # output projection: map each timestep's hidden state to a scalar
        W = tf.Variable(tf.random_normal([self.hidden_layer_size, 1]), name="W")
        b = tf.Variable(tf.random_normal([1]), name="b")
        with tf.variable_scope("cell_d"):
            cell = tf.contrib.rnn.BasicLSTMCell(self.hidden_layer_size)
        with tf.variable_scope("rnn_d"):
            outputs, states = tf.nn.dynamic_rnn(cell, self.X, dtype=tf.float32)

        # tile W across the batch so every timestep's output gets projected
        W_repeated = tf.tile(tf.expand_dims(W, 0), [tf.shape(self.X)[0], 1, 1])
        out = tf.matmul(outputs, W_repeated) + b
        out = tf.squeeze(out)
        return out

    def _data_prepare(self):
        self.train_x = []
        self.train_y = []
        # log-transform prices to stabilize scale, then build overlapping
        # windows: targets are the inputs shifted forward by one step
        data = np.log1p(self.data)
        for i in range(len(data) - self.seq_size - 1):
            self.train_x.append(np.expand_dims(data[i: i + self.seq_size], axis=1).tolist())
            self.train_y.append(data[i + 1: i + self.seq_size + 1].tolist())

    def train_pred_rnn(self):

        self._create_placeholders()

        y_hat = self._create_rnn()
        self._data_prepare()
        loss = tf.reduce_mean(tf.square(y_hat - self.Y))
        train_optim = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
        feed_dict = {self.X: self.train_x, self.Y: self.train_y}

        saver = tf.train.Saver(tf.global_variables())
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            for step in range(1, 20001):
                _, loss_ = sess.run([train_optim, loss], feed_dict=feed_dict)
                if step % 100 == 0:
                    print("{} {}".format(step, loss_))
            saver.save(sess, self.log_dir + "model.ckpt")

            # prediction
            prev_seq = self.train_x[-1]
            predict = []
            for i in range(5):
                next_seq = sess.run(y_hat, feed_dict={self.X: [prev_seq]})
                predict.append(next_seq[-1])
                prev_seq = np.vstack((prev_seq[1:], next_seq[-1]))
            predict = np.exp(predict) - 1  # invert the log1p transform
            print(predict)
            self.pred = predict

    def visualize(self):
        pred = self.pred
        plt.figure()
        plt.plot(list(range(len(self.data))), self.data, color='b', label="history")
        plt.plot(list(range(len(self.data), len(self.data) + len(pred))), pred, color='r', label="prediction")
        plt.legend(prop={'family': 'SimHei', 'size': 15})  # legend after the labelled plots
        plt.title(u"{}股价预测".format(self.stock_id), fontproperties="SimHei")
        plt.xlabel(u"日期", fontproperties="SimHei")
        plt.ylabel(u"股价", fontproperties="SimHei")
        plt.savefig("stock.png")
        plt.show()


if __name__ == "__main__":
    stock = StockRNN()
    # print(stock.read_stock_data())
    log_dir = "stock_model"
    stock.train_pred_rnn()
    stock.visualize()
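
The utils module imported at the top of the script is not shown in the post; a minimal stand-in for what get_n_day_before and date_2_str presumably do (my assumption, not the original helpers):

# Hypothetical utils module; the original helpers are not shown in the post.
import datetime

def date_2_str(date):
    # format a date as the "YYYY-MM-DD" string yahoo_finance expects
    return date.strftime("%Y-%m-%d")

def get_n_day_before(n):
    # the date n days before today, as a "YYYY-MM-DD" string
    return date_2_str(datetime.date.today() - datetime.timedelta(days=n))
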

Run result: predicted BABA stock prices for the next 5 days (run on 2017-02-27)


[ 104.23436737  103.82189941  103.59770966  103.43360138  103.29838562]

Partial data:
2017-02-08,103.57
2017-02-09,103.339996
2017-02-10,102.360001
2017-02-13,103.099998
2017-02-14,101.589996
2017-02-15,101.550003
2017-02-16,100.82
2017-02-17,100.519997
2017-02-21,102.120003
2017-02-22,104.199997
2017-02-23,102.459999
2017-02-24,102.949997
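
For reference, the windowing in _data_prepare turns the price series into overlapping input/target sequences shifted by one step; a tiny worked example with seq_size=3:

# Tiny worked example of the sliding-window preparation used in
# _data_prepare, with seq_size = 3 on a toy series.
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
seq_size = 3
train_x = [data[i: i + seq_size] for i in range(len(data) - seq_size - 1)]
train_y = [data[i + 1: i + seq_size + 1] for i in range(len(data) - seq_size - 1)]
print(train_x[0], train_y[0])  # [ 1.  2.  3.] [ 2.  3.  4.] -> targets shifted one step
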



 
 Posted at 10:04 PM