Hi all! Today I'm going through the MNIST dataset to do multi-class classification.

Yes, please follow the tutorial and run the code as you read. (What I followed was the Chinese version of the official tutorial.)

The functions you should know before we move on (a quick demo follows the list):

softmax(): the activation function for multi-class classification. Compare it with sigmoid, which handles the binary (0/1) case; softmax turns the scores into a probability distribution over all the classes, and the class with the biggest value is taken as the prediction (roughly: the biggest one becomes 1, the others 0).

tf.argmax(): finds the index of the maximum value along a given axis.

tf.equal(): compares two tensors element-wise and returns True where they are the same.

tf.cast(): converts the booleans into numbers (True → 1, False → 0).
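Here is a minimal sketch (TF 1.x, with made-up scores and labels, not MNIST data) showing how these four functions work together:

import tensorflow as tf

# Made-up scores for 2 samples over 3 classes (hypothetical values).
logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.5, 2.5, 0.3]])
probs = tf.nn.softmax(logits)                # each row becomes a probability distribution
pred = tf.argmax(probs, 1)                   # index of the largest probability per row
truth = tf.constant([0, 2], dtype=tf.int64)  # made-up "correct" labels
match = tf.equal(pred, truth)                # element-wise comparison -> booleans
as_float = tf.cast(match, tf.float32)        # True/False -> 1.0/0.0

with tf.Session() as sess:
    print(sess.run(pred))      # [0 1]
    print(sess.run(match))     # [ True False]
    print(sess.run(as_float))  # [1. 0.]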

Data description

  
mnist.train       # training data, 55,000 examples
mnist.test        # test data, 10,000 examples
mnist.validation  # validation data, 5,000 examples

# All of them share the same structure:
mnist.train.images  # each image is 28x28 pixels
mnist.train.labels  # one-hot over the digits 0-9

# For the training data:
mnist.train.images is a tensor of shape [55000, 784]
mnist.train.labels is a tensor of shape [55000, 10]

# For consistency, we reshape each 28x28 image into a one-dimensional
# vector of 784 pixels (see the sketch below).

# The idea of the whole flow is to find the right W and b that fit the
# training data, then test the accuracy.
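A quick numpy sketch of that flattening (random values, just to illustrate the shapes):

import numpy as np

img = np.random.rand(28, 28)   # a made-up 28x28 "image"
flat = img.reshape(784)        # flattened, as mnist.train.images stores it
print(img.shape, flat.shape)   # (28, 28) (784,)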
 

Define the Variables

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)  # read the MNIST data

import tensorflow as tf 

import numpy as np 

# We need placeholders for x and y_, and variables for W and b; y_ holds the correct labels from the data.

x = tf.placeholder(tf.float32, [None, 784])  # None means you can feed in any number of images

W = tf.Variable(tf.zeros([784, 10]))  # 784 matches the input pixels; 10 produces one score per digit class

b = tf.Variable(tf.zeros([10]))  # bias b, a 10-dimensional vector

y = tf.nn.softmax(tf.matmul(x, W) + b)  # predicted labels

y_ = tf.placeholder(tf.float32, [None, 10])  # the correct labels
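If the 10 in W's shape is confusing, a quick shape check helps (numpy sketch with a made-up batch of 2):

import numpy as np

xb = np.zeros((2, 784))          # a batch of 2 flattened images
Wb = np.zeros((784, 10))
bb = np.zeros(10)
print((xb.dot(Wb) + bb).shape)   # (2, 10): one score per class, per image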

# loss function
cross_entropy = tf.reduce_mean(tf.reduce_sum(-y_ * tf.log(y), reduction_indices=[1]))
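To see what this loss computes, here's a tiny numpy sketch with made-up values (one sample, three classes):

import numpy as np

y_true = np.array([[0., 1., 0.]])     # one-hot label: class 1
y_pred = np.array([[0.1, 0.7, 0.2]])  # a made-up softmax output
loss = np.mean(np.sum(-y_true * np.log(y_pred), axis=1))
print(loss)  # -log(0.7) ≈ 0.357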

# init
init = tf.global_variables_initializer()  # initialize_all_variables() is deprecated in newer TF 1.x





# train 
trainstep = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)


# session 

sess=tf.Session()

sess.run(init)

for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(trainstep, feed_dict={x: batch_xs, y_: batch_ys})
    # Be careful: the inputs are x and y_; y is computed by y = tf.nn.softmax(tf.matmul(x, W) + b).
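If you want to watch training progress, a variant of the loop (same APIs, just also fetching the loss) might look like this sketch:

for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    # fetch the loss together with the train step
    _, loss_val = sess.run([trainstep, cross_entropy], feed_dict={x: batch_xs, y_: batch_ys})
    if i % 100 == 0:
        print("step", i, "loss", loss_val)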

# match the predicted labels against the correct ones

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))

# convert the booleans to 0/1 and take the mean, which gives the accuracy

accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# evaluate on the test data

print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))



0.92  # Note that the result above was computed with stochastic gradient descent. If you use all of the data, you get 0.9216, not much of a difference. The takeaway here is that a large amount of data does not significantly improve this classifier's performance; with regularization you can reach an accuracy of 0.9228. Tested it myself.
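The note above mentions regularization but the exact setup behind the 0.9228 figure isn't given here; one common option is to add an L2 penalty on W to the loss, roughly like this sketch (l2_lambda is a hypothetical strength, and you'd define this before init and re-run the training loop):

l2_lambda = 0.001  # hypothetical regularization strength
loss_reg = cross_entropy + l2_lambda * tf.nn.l2_loss(W)  # add an L2 penalty on W
trainstep_reg = tf.train.GradientDescentOptimizer(0.5).minimize(loss_reg)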