Deep Learning: TensorFlow Basic Data Types and Regression Algorithms in Practice - Coding Skills Advanced Practice



The Qin Kaixin technical community is about to launch the "Coding Skills Advanced Practice" series, covering in-depth usage and techniques of mainstream big-data and deep-learning languages and frameworks: Python, Java, Scala, TensorFlow, and more. Stay tuned. Why write such a series? When a container-cloud expert asked me how to implement a thread pool, I realized that the Java concurrency theory and multithreaded design patterns I had once studied had completely slipped my mind. Annoyed with myself, I decided to put my programming chops back on display.

Copyright notice: this technical column is a summary and distillation of the author's (Qin Kaixin) day-to-day work, drawing on cases from real commercial environments and offering tuning advice and cluster capacity-planning guidance for commercial applications. Please keep following this blog. QQ email: 1120746959@qq.com; feel free to reach out for any technical exchange.

1 Basic TensorFlow Operations

  • A basic TensorFlow model

    import tensorflow as tf

    a = 3
    # Create variables.
    w = tf.Variable([[0.5, 1.0]])
    x = tf.Variable([[2.0], [1.0]])

    y = tf.matmul(w, x)

    # Variables have to be explicitly initialized before you can run ops.
    init_op = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init_op)
        print(y.eval())
  • TensorFlow basic data types

    # int32 (note the dtype must be passed as tf.int32)
    tf.zeros([3, 4], tf.int32) ==> [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]

    # 'tensor' is [[1, 2, 3], [4, 5, 6]]
    tf.zeros_like(tensor) ==> [[0, 0, 0], [0, 0, 0]]
    tf.ones([2, 3], tf.int32) ==> [[1, 1, 1], [1, 1, 1]]

    # 'tensor' is [[1, 2, 3], [4, 5, 6]]
    tf.ones_like(tensor) ==> [[1, 1, 1], [1, 1, 1]]

    # Constant 1-D tensor populated with value list.
    tensor = tf.constant([1, 2, 3, 4, 5, 6, 7]) ==> [1 2 3 4 5 6 7]

    # Constant 2-D tensor populated with scalar value -1.
    tensor = tf.constant(-1.0, shape=[2, 3]) ==> [[-1. -1. -1.]
                                                  [-1. -1. -1.]]

    tf.linspace(10.0, 12.0, 3, name="linspace") ==> [ 10.0  11.0  12.0]

    # 'start' is 3, 'limit' is 18, 'delta' is 3
    tf.range(start, limit, delta) ==> [3, 6, 9, 12, 15]
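The `==>` annotations above only document the expected values; in graph mode nothing is computed until a session runs the op. A minimal sketch (my own addition) showing how to actually evaluate one of them:

    with tf.Session() as sess:
        print(sess.run(tf.linspace(10.0, 12.0, 3)))  # [10. 11. 12.]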
  • The random_shuffle and random_normal ops

    norm = tf.random_normal([2, 3], mean=-1, stddev=4)

    # Shuffle the first dimension of a tensor.
    c = tf.constant([[1, 2], [3, 4], [5, 6]])
    shuff = tf.random_shuffle(c)

    # Each time we run these ops, different results are generated.
    sess = tf.Session()
    print(sess.run(norm))
    print(sess.run(shuff))

    # Sample output:
    [[-0.30886292  3.11809683  3.29861784]
     [-7.09597015 -1.89811802  1.75282788]]

    [[3 4]
     [5 6]
     [1 2]]
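As the comment says, each run generates different results. When you need reproducible draws, TF 1.x offers graph-level and op-level seeds; a hedged sketch (variable names of my own choosing):

    # Graph-level seed: makes all random ops reproducible across runs.
    tf.set_random_seed(1234)
    # Op-level seed: pins just this one op.
    norm_fixed = tf.random_normal([2, 3], mean=-1, stddev=4, seed=42)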
  • The complexity behind simple operations

    state = tf.Variable(0)
    new_value = tf.add(state, tf.constant(1))
    update = tf.assign(state, new_value)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(state))
        for _ in range(3):
            sess.run(update)
            print(sess.run(state))
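The add-then-assign pair above can also be collapsed into a single op with tf.assign_add; a small sketch (assuming TF 1.x), equivalent to the loop above:

    state = tf.Variable(0)
    # Fuses the addition and the assignment into one op.
    update = tf.assign_add(state, tf.constant(1))

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(3):
            print(sess.run(update))  # prints 1, 2, 3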
  • Saving and loading models

    # tf.train.Saver
    w = tf.Variable([[0.5, 1.0]])
    x = tf.Variable([[2.0], [1.0]])
    y = tf.matmul(w, x)
    init_op = tf.global_variables_initializer()
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(init_op)
        # Do some work with the model.
        # Save the variables to disk.
        save_path = saver.save(sess, "C://tensorflow//model//test")
        print("Model saved in file: ", save_path)
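The snippet above only saves. Restoring is symmetric: rebuild the same graph (e.g. in a fresh script) and call saver.restore in place of the initializer. A minimal sketch, reusing the checkpoint path from above:

    # Rebuild the identical graph before restoring.
    w = tf.Variable([[0.5, 1.0]])
    x = tf.Variable([[2.0], [1.0]])
    y = tf.matmul(w, x)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        # restore() takes the place of running init_op.
        saver.restore(sess, "C://tensorflow//model//test")
        print("Model restored, y =", y.eval())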
  • Converting between NumPy and TensorFlow

    import numpy as np

    a = np.zeros((3, 3))
    ta = tf.convert_to_tensor(a)
    with tf.Session() as sess:
        print(sess.run(ta))
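That covers NumPy to TensorFlow; the reverse direction is implicit in TF 1.x: sess.run (or .eval()) on a tensor returns a NumPy ndarray. A quick sketch:

    t = tf.ones([2, 2])
    with tf.Session() as sess:
        back = sess.run(t)   # evaluating a tensor yields a numpy array
        print(type(back))    # <class 'numpy.ndarray'>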
  • TensorFlow placeholders

    input1 = tf.placeholder(tf.float32)
    input2 = tf.placeholder(tf.float32)
    # Note: tf.mul was renamed tf.multiply in TensorFlow 1.0.
    output = tf.multiply(input1, input2)
    with tf.Session() as sess:
        print(sess.run([output], feed_dict={input1: [7.], input2: [2.]}))
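Placeholders can also declare an explicit shape, which lets TensorFlow validate what you feed at run time; a sketch with names of my own choosing:

    # None leaves the batch dimension open.
    m = tf.placeholder(tf.float32, shape=[None, 2])
    doubled = m * 2
    with tf.Session() as sess:
        print(sess.run(doubled, feed_dict={m: [[1., 2.], [3., 4.]]}))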

2 Linear Regression with TensorFlow

  • Generating a linear dataset with numpy

    import numpy as np
    import tensorflow as tf
    import matplotlib.pyplot as plt

    # Randomly generate 1000 points scattered around the line y = 0.1x + 0.3.
    num_points = 1000
    vectors_set = []
    for i in range(num_points):
        x1 = np.random.normal(0.0, 0.55)
        y1 = x1 * 0.1 + 0.3 + np.random.normal(0.0, 0.03)
        vectors_set.append([x1, y1])

    # Split into x and y samples.
    x_data = [v[0] for v in vectors_set]
    y_data = [v[1] for v in vectors_set]

    plt.scatter(x_data, y_data, c='r')
    plt.show()
[Figure: scatter plot of the generated points around y = 0.1x + 0.3]
  • Implementing the linear model in TensorFlow

    # Generate a 1-D W matrix with random values in [-1, 1].
    W = tf.Variable(tf.random_uniform([1], -1.0, 1.0), name='W')
    # Generate a 1-D b matrix initialized to 0.
    b = tf.Variable(tf.zeros([1]), name='b')
    # Compute the predicted value y.
    y = W * x_data + b

    # Loss: mean squared error between the prediction y and the actual y_data.
    loss = tf.reduce_mean(tf.square(y - y_data), name='loss')
    # Optimizer: gradient descent (the argument is the learning rate).
    optimizer = tf.train.GradientDescentOptimizer(0.5)

    # Training is just minimizing this loss.
    train = optimizer.minimize(loss, name='train')

    sess = tf.Session()

    init = tf.global_variables_initializer()
    sess.run(init)

    # Initial values of W and b.
    print("W =", sess.run(W), "b =", sess.run(b), "loss =", sess.run(loss))
    # Run 20 training steps.
    for step in range(20):
        sess.run(train)
        # Print the trained W and b.
        print("W =", sess.run(W), "b =", sess.run(b), "loss =", sess.run(loss))
    # Note: tf.train.SummaryWriter was renamed tf.summary.FileWriter in TensorFlow 1.0.
    writer = tf.summary.FileWriter("./tmp", sess.graph)
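With the graph written to ./tmp by the FileWriter above, you can inspect it in TensorBoard by running `tensorboard --logdir=./tmp` in a shell and opening the printed URL (assuming TensorBoard is installed alongside TensorFlow).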
  • Training output

    W = [ 0.96539688] b = [ 0.] loss = 0.297884
    W = [ 0.71998411] b = [ 0.28193575] loss = 0.112606
    W = [ 0.54009342] b = [ 0.28695393] loss = 0.0572231
    W = [ 0.41235447] b = [ 0.29063231] loss = 0.0292957
    W = [ 0.32164571] b = [ 0.2932443] loss = 0.0152131
    W = [ 0.25723246] b = [ 0.29509908] loss = 0.00811188
    W = [ 0.21149193] b = [ 0.29641619] loss = 0.00453103
    W = [ 0.17901111] b = [ 0.29735151] loss = 0.00272536
    W = [ 0.15594614] b = [ 0.29801565] loss = 0.00181483
    W = [ 0.13956745] b = [ 0.29848731] loss = 0.0013557
    W = [ 0.12793678] b = [ 0.29882219] loss = 0.00112418
    W = [ 0.11967772] b = [ 0.29906002] loss = 0.00100743
    W = [ 0.11381286] b = [ 0.29922891] loss = 0.000948558
    W = [ 0.10964818] b = [ 0.29934883] loss = 0.000918872
    W = [ 0.10669079] b = [ 0.29943398] loss = 0.000903903
    W = [ 0.10459071] b = [ 0.29949448] loss = 0.000896354
    W = [ 0.10309943] b = [ 0.29953739] loss = 0.000892548
    W = [ 0.10204045] b = [ 0.29956791] loss = 0.000890629
    W = [ 0.10128847] b = [ 0.29958954] loss = 0.000889661
    W = [ 0.10075447] b = [ 0.29960492] loss = 0.000889173
    W = [ 0.10037527] b = [ 0.29961586] loss = 0.000888927

    # Plot the fitted line over the data:
    plt.scatter(x_data, y_data, c='r')
    plt.plot(x_data, sess.run(W) * x_data + sess.run(b))
    plt.show()

[Figure: fitted regression line over the scatter data]

3 Loading the MNIST Dataset

  • Loading

    import numpy as np
    import tensorflow as tf
    import matplotlib.pyplot as plt
    # from tensorflow.examples.tutorials.mnist import input_data
    import input_data

    print("packs loaded")

    print("Download and Extract MNIST dataset")
    # Use one-hot encoding for the labels.
    mnist = input_data.read_data_sets('data/', one_hot=True)
    print()
    print(" type of 'mnist' is %s" % (type(mnist)))
    print(" number of train data is %d" % (mnist.train.num_examples))
    print(" number of test data is %d" % (mnist.test.num_examples))

    Download and Extract MNIST dataset
    Extracting data/train-images-idx3-ubyte.gz
    Extracting data/train-labels-idx1-ubyte.gz
    Extracting data/t10k-images-idx3-ubyte.gz
    Extracting data/t10k-labels-idx1-ubyte.gz
     type of 'mnist' is <class 'tensorflow.contrib.learn.python.learn.datasets.base.Datasets'>
     number of train data is 55000
     number of test data is 10000
  • What does the data of MNIST look like?

    print ("What does the data of MNIST look like?")
      trainimg   = mnist.train.images
      trainlabel = mnist.train.labels
      testimg    = mnist.test.images
      testlabel  = mnist.test.labels
      print
      print (" type of 'trainimg' is %s"    % (type(trainimg)))
      print (" type of 'trainlabel' is %s"  % (type(trainlabel)))
      print (" type of 'testimg' is %s"     % (type(testimg)))
      print (" type of 'testlabel' is %s"   % (type(testlabel)))
      print (" shape of 'trainimg' is %s"   % (trainimg.shape,))
      print (" shape of 'trainlabel' is %s" % (trainlabel.shape,))
      print (" shape of 'testimg' is %s"    % (testimg.shape,))
      print (" shape of 'testlabel' is %s"  % (testlabel.shape,))
    
    
      What does the data of MNIST look like?
       type of 'trainimg' is <class 'numpy.ndarray'>
       type of 'trainlabel' is <class 'numpy.ndarray'>
       type of 'testimg' is <class 'numpy.ndarray'>
       type of 'testlabel' is <class 'numpy.ndarray'>
       shape of 'trainimg' is (55000, 784)
       shape of 'trainlabel' is (55000, 10)
       shape of 'testimg' is (10000, 784)
       shape of 'testlabel' is (10000, 10)
  • What does the training data look like?

    # What does the training data look like?
    print("What does the training data look like?")
    nsample = 5
    randidx = np.random.randint(trainimg.shape[0], size=nsample)

    for i in randidx:
        curr_img   = np.reshape(trainimg[i, :], (28, 28))  # 28-by-28 matrix
        curr_label = np.argmax(trainlabel[i, :])           # label index
        plt.matshow(curr_img, cmap=plt.get_cmap('gray'))
        plt.title(str(i) + "th Training Data, Label is " + str(curr_label))
        print(str(i) + "th Training Data, Label is " + str(curr_label))
        plt.show()
[Figure: sample MNIST digit images with their labels]
  • Batch Learning?

    print ("Batch Learning? ")
     batch_size = 100
     batch_xs, batch_ys = mnist.train.next_batch(batch_size)
     print ("type of 'batch_xs' is %s" % (type(batch_xs)))
     print ("type of 'batch_ys' is %s" % (type(batch_ys)))
     print ("shape of 'batch_xs' is %s" % (batch_xs.shape,))
     print ("shape of 'batch_ys' is %s" % (batch_ys.shape,))
    
     Batch Learning? 
     type of 'batch_xs' is <class 'numpy.ndarray'>
     type of 'batch_ys' is <class 'numpy.ndarray'>
     shape of 'batch_xs' is (100, 784)
     shape of 'batch_ys' is (100, 10)

4 Logistic Regression on MNIST

  • TensorFlow's tf.reduce_mean function

    # Column-wise mean of a 2x2 example.
    x = tf.constant([[1., 1.], [2., 2.]])
    m1 = tf.reduce_mean(x, axis=0)
    # Result: [1.5, 1.5]
  • TensorFlow's tf.argmax (with sess = tf.InteractiveSession() so that .eval() works)

    sess = tf.InteractiveSession()

    arr = np.array([[31, 23,  4, 24, 27, 34],
                    [18,  3, 25,  0,  6, 35],
                    [28, 14, 33, 22, 20,  8],
                    [13, 30, 21, 19,  7,  9],
                    [16,  1, 26, 32,  2, 29],
                    [17, 12,  5, 11, 10, 15]])

    # With an InteractiveSession, .eval() returns op results directly.
    # Rank of the matrix: 2
    # tf.rank(arr).eval()

    # Rows and columns: [6, 6]
    # tf.shape(arr).eval()

    # Axis 0 works along columns: index of the max in each column -> [0, 3, 2, 4, 0, 1]
    # tf.argmax(arr, 0).eval()
    # 0 -> 31 (arr[0, 0])
    # 3 -> 30 (arr[3, 1])
    # 2 -> 33 (arr[2, 2])
    tf.argmax(arr, 1).eval()
    # 5 -> 34 (arr[0, 5])
    # 5 -> 35 (arr[1, 5])
    # 2 -> 33 (arr[2, 2])

    array([5, 5, 2, 1, 3, 0], dtype=int64)
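This is the same trick used later for measuring accuracy: tf.argmax turns a one-hot label (or a softmax output) into a class id. A small sketch (my own example), reusing the InteractiveSession above:

    labels = tf.constant([[0., 0., 1.], [1., 0., 0.]])
    print(tf.argmax(labels, 1).eval())  # [2 0]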
  • Loading the dataset

    import numpy as np
    import tensorflow as tf
    import matplotlib.pyplot as plt
    import input_data

    mnist      = input_data.read_data_sets('data/', one_hot=True)
    trainimg   = mnist.train.images
    trainlabel = mnist.train.labels
    testimg    = mnist.test.images
    testlabel  = mnist.test.labels
    print("MNIST loaded")

    Extracting data/train-images-idx3-ubyte.gz
    Extracting data/train-labels-idx1-ubyte.gz
    Extracting data/t10k-images-idx3-ubyte.gz
    Extracting data/t10k-labels-idx1-ubyte.gz
    MNIST loaded

    print(trainimg.shape)
    print(trainlabel.shape)
    print(testimg.shape)
    print(testlabel.shape)
    # print(trainimg)
    print(trainlabel[0])

    (55000, 784)
    (55000, 10)
    (10000, 784)
    (10000, 10)
    [ 0.  0.  0.  0.  0.  0.  0.  1.  0.  0.]
  • Building the logistic regression model in TF

    # Placeholders (each row is one sample).
    x = tf.placeholder("float", [None, 784])
    # 10 positions in total, e.g. [ 0.  0.  0.  0.  0.  0.  0.  1.  0.  0.]
    y = tf.placeholder("float", [None, 10])  # None leaves the batch size open

    # 10-class task: 784 inputs, 10 outputs.
    W = tf.Variable(tf.zeros([784, 10]))

    # One bias per output.
    b = tf.Variable(tf.zeros([10]))

    # LOGISTIC REGRESSION MODEL (10 outputs).
    actv = tf.nn.softmax(tf.matmul(x, W) + b)

    # COST FUNCTION (cross-entropy loss).
    cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(actv), reduction_indices=1))

    # OPTIMIZER
    learning_rate = 0.01
    optm = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

    # PREDICTION and ACCURACY ops, plus the initializer. The training loop
    # below references accr and init, so they must be defined here.
    pred = tf.equal(tf.argmax(actv, 1), tf.argmax(y, 1))
    accr = tf.reduce_mean(tf.cast(pred, "float"))
    init = tf.global_variables_initializer()
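One caveat: the hand-rolled -sum(y * log(actv)) can produce NaN when actv underflows to zero. TF 1.x ships a numerically safer fused op; a hedged sketch of the substitution (not what the original code uses):

    # Compute raw logits and let the fused op apply softmax + cross-entropy.
    logits = tf.matmul(x, W) + b
    cost_stable = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))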
  • Training the model

    # Number of training epochs.
    training_epochs = 50
    # Samples per mini-batch.
    batch_size      = 100
    display_step    = 5
    # SESSION
    sess = tf.Session()
    sess.run(init)
    # MINI-BATCH LEARNING
    for epoch in range(training_epochs):
        avg_cost = 0.
        num_batch = int(mnist.train.num_examples/batch_size)
        for i in range(num_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(optm, feed_dict={x: batch_xs, y: batch_ys})
            feeds = {x: batch_xs, y: batch_ys}
            avg_cost += sess.run(cost, feed_dict=feeds)/num_batch
        # DISPLAY
        if epoch % display_step == 0:
            feeds_train = {x: batch_xs, y: batch_ys}
            feeds_test = {x: mnist.test.images, y: mnist.test.labels}
            train_acc = sess.run(accr, feed_dict=feeds_train)
            test_acc = sess.run(accr, feed_dict=feeds_test)
            print("Epoch: %03d/%03d cost: %.9f train_acc: %.3f test_acc: %.3f"
                  % (epoch, training_epochs, avg_cost, train_acc, test_acc))
    print("DONE")

    Epoch: 000/050 cost: 1.177906594 train_acc: 0.840 test_acc: 0.855
    Epoch: 005/050 cost: 0.440515266 train_acc: 0.860 test_acc: 0.895
    Epoch: 010/050 cost: 0.382895913 train_acc: 0.910 test_acc: 0.905
    Epoch: 015/050 cost: 0.356607343 train_acc: 0.870 test_acc: 0.909
    Epoch: 020/050 cost: 0.341326642 train_acc: 0.860 test_acc: 0.912
    Epoch: 025/050 cost: 0.330556413 train_acc: 0.910 test_acc: 0.913
    Epoch: 030/050 cost: 0.321508561 train_acc: 0.840 test_acc: 0.916
    Epoch: 035/050 cost: 0.314936944 train_acc: 0.940 test_acc: 0.917
    Epoch: 040/050 cost: 0.309805418 train_acc: 0.940 test_acc: 0.918
    Epoch: 045/050 cost: 0.305343132 train_acc: 0.960 test_acc: 0.918
    DONE
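After training, it is worth one final pass over the entire test set; a short sketch using the accr op defined in the model-construction step:

    feeds_test = {x: mnist.test.images, y: mnist.test.labels}
    print("Final test accuracy: %.3f" % sess.run(accr, feed_dict=feeds_test))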

5 Summary

The real goal of this article is to understand TensorFlow's design philosophy through simple, concrete examples.


Qin Kaixin, Shenzhen, 2018-12-09 21:28

