TensorFlow Mechanics 101

IT News 2015. 11. 10. 16:55

Google has open-sourced its deep learning library.

See the attached material below for the details.

The library can be used from Python and C++.

The software license is Apache 2.0, so programs built on the released source code can be developed commercially and redistributed.

 

TensorFlow Mechanics 101

Code: tensorflow/g3doc/tutorials/mnist/

The goal of this tutorial is to show how to use TensorFlow to train and evaluate a simple feed-forward neural network for handwritten digit classification using the (classic) MNIST data set. The intended audience for this tutorial is experienced machine learning users interested in using TensorFlow.

These tutorials are not intended for teaching Machine Learning in general.

Please ensure you have followed the instructions to Install TensorFlow.

Tutorial Files

This tutorial references the following files:

File                      Purpose
mnist.py                  The code to build a fully-connected MNIST model.
fully_connected_feed.py   The main code, to train the built MNIST model against the downloaded dataset using a feed dictionary.

Simply run the fully_connected_feed.py file directly to start training:

python fully_connected_feed.py
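
fully_connected_feed.py also defines a few command-line flags, including the training directory, step count, and batch size. The flag names below match the tutorial code at the time of writing, but treat them as an illustration and verify them against your copy of the script:

python fully_connected_feed.py --train_dir=data --max_steps=2000 --batch_size=100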

Prepare the Data

MNIST is a classic problem in machine learning. The problem is to look at greyscale 28x28 pixel images of handwritten digits and determine which digit the image represents, for all the digits from zero to nine.

[Image: MNIST Digits]

For more information, refer to Yann LeCun's MNIST page or Chris Olah's visualizations of MNIST.

Download

At the top of the run_training() method, the input_data.read_data_sets() function will ensure that the correct data has been downloaded to your local training folder and then unpack that data to return a dictionary of DataSet instances.

data_sets = input_data.read_data_sets(FLAGS.train_dir, FLAGS.fake_data)

NOTE: The fake_data flag is used for unit-testing purposes and may be safely ignored by the reader.

Dataset                 Purpose
data_sets.train         55000 images and labels, for primary training.
data_sets.validation    5000 images and labels, for iterative validation of training accuracy.
data_sets.test          10000 images and labels, for final testing of trained accuracy.
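
As a quick sanity check, the object returned by read_data_sets() above can be inspected directly. This is a minimal sketch assuming the DataSet instances expose num_examples and images attributes, as in the tutorial's input_data module:

# Sanity-check the splits returned by read_data_sets() above.
# The attribute names (num_examples, images) follow the tutorial's DataSet class;
# verify them against your copy of input_data.py.
print(data_sets.train.num_examples)       # 55000
print(data_sets.validation.num_examples)  # 5000
print(data_sets.test.num_examples)        # 10000
print(data_sets.train.images.shape)       # (55000, 784): each 28x28 image is flattened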

For more information about the data, please read the Download tutorial.

Inputs and Placeholders

The placeholder_inputs() function creates two tf.placeholder ops that define the shape of the inputs, including the batch_size, to the rest of the graph and into which the actual training examples will be fed.

images_placeholder = tf.placeholder(tf.float32, shape=(batch_size,
                                                       IMAGE_PIXELS))
labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))

Further down, in the training loop, the full image and label datasets are sliced to fit the batch_size for each step, matched with these placeholder ops, and then passed into the sess.run() function using the feed_dict parameter.
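
As a rough illustration of that step, the sketch below assumes the DataSet objects provide a next_batch(batch_size) method (as the tutorial's input_data module does); train_op and loss stand for the ops built in the next section, and the tutorial's fully_connected_feed.py wraps this logic in a fill_feed_dict() helper:

# One training step using feed_dict (sketch).
# next_batch() is assumed from the tutorial's DataSet class; train_op and loss
# are the ops created by the graph-building functions described below.
images_feed, labels_feed = data_sets.train.next_batch(batch_size)
feed_dict = {
    images_placeholder: images_feed,
    labels_placeholder: labels_feed,
}
_, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)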

Build the Graph

After creating placeholders for the data, the graph is built from the mnist.py file according to a 3-stage pattern: inference(), loss(), and training().

  1. inference() - Builds the graph as far as is required for running the network forward to make predictions.
  2. loss() - Adds to the inference graph the ops required to generate loss.
  3. training() - Adds to the loss graph the ops required to compute and apply gradients.
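
Putting the three stages together, run_training() wires them up roughly as follows. This is a sketch based on the tutorial's mnist.py; the argument names (hidden unit counts, learning rate) are taken from the tutorial's flags and should be checked against your copy of the code:

import mnist  # the tutorial's model-building module

# 1. inference(): build the forward pass from the image placeholder to logits.
logits = mnist.inference(images_placeholder, FLAGS.hidden1, FLAGS.hidden2)

# 2. loss(): add the ops that compute the loss from logits and labels.
loss = mnist.loss(logits, labels_placeholder)

# 3. training(): add the ops that compute and apply gradients to minimize that loss.
train_op = mnist.training(loss, FLAGS.learning_rate)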


 

Detailed contents: see the attached Getting Started - TensorFlow.pdf.

 

 

Reference URL: http://tensorflow.org/

 
