TensorFlow 2.0 is both simple and flexible, focusing on features like:
Fast model design and high-level control with Keras
Estimator API for machine learning workflows, with premade models for regression, boosted trees, and random forests
Eager execution for imperative programming, with AutoGraph for taking advantage of graph execution
SavedModel for exporting trained models and deploying on any platform
First Tutorial
For our first lessons, we'll take a quick look at some MNIST examples with fully-connected and convolutional neural networks to get familiar with the core features of TensorFlow 2.0:
tf.keras: A high-level, object-oriented API for fast prototyping of deep learning models
tf.GradientTape: Records gradients on-the-fly for automatic differentiation and backprop
tf.function: Pre-compile computational graphs from Python functions with AutoGraph
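Before diving into the tutorial code, here's a tiny standalone sketch (not part of the tutorial itself) of how tf.GradientTape and tf.function fit together: the tape records operations so gradients can be computed afterwards, and the decorator compiles the Python function into a graph.

import tensorflow as tf

@tf.function  # compile this Python function into a TensorFlow graph via AutoGraph
def grad_of_square(x):
    with tf.GradientTape() as tape:
        tape.watch(x)  # watch a plain tensor (tf.Variables are watched automatically)
        y = x * x      # operations inside the tape's context are recorded
    return tape.gradient(y, x)  # dy/dx = 2x

print(grad_of_square(tf.constant(3.0)))  # tf.Tensor(6.0, shape=(), dtype=float32)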
Fully-connected Network
For our first lesson, we'll train a fully-connected neural network for MNIST handwritten digit recognition. Let's start by setting up some methods to load MNIST from keras.datasets and preprocess the images into rows of normalized 784-dimensional vectors.
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, optimizers, datasets

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # or any of {'0', '1', '2'}

def mnist_dataset():
    # Load MNIST, preprocess, shuffle, and batch it as a tf.data pipeline.
    (x, y), _ = datasets.mnist.load_data()
    ds = tf.data.Dataset.from_tensor_slices((x, y))
    ds = ds.map(prepare_mnist_features_and_labels)
    ds = ds.take(20000).shuffle(20000).batch(100)
    return ds

def prepare_mnist_features_and_labels(x, y):
    # Scale pixel values to [0, 1] and cast labels to int64.
    x = tf.cast(x, tf.float32) / 255.0
    y = tf.cast(y, tf.int64)
    return x, y
Now let's build our network as a keras.Sequential model and instantiate an Adam optimizer from keras.optimizers.
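The exact layer sizes aren't critical; a minimal sketch, assuming two 100-unit hidden layers and a Reshape layer that flattens each 28x28 image into a 784-dimensional vector, might look like this:

model = keras.Sequential([
    layers.Reshape((28 * 28,), input_shape=(28, 28)),  # flatten each image to a 784-dim vector
    layers.Dense(100, activation='relu'),
    layers.Dense(100, activation='relu'),
    layers.Dense(10)])  # raw logits for the 10 digit classes

optimizer = optimizers.Adam()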
With our data and model in place, we can move on to the training procedure. For the methods here, we use the @tf.function AutoGraph decorator to pre-compile them as TensorFlow computational graphs. TensorFlow 2.0 is fully imperative, so the AutoGraph decorator isn't necessary for the code to work, but it speeds things up by taking advantage of graph execution, so @tf.function is definitely worth using in our case.
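As a rough sketch (the helper names train_one_step and train here are illustrative, not necessarily the tutorial's), a graph-compiled training step built on tf.GradientTape might look like this:

compute_loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True)

@tf.function  # AutoGraph compiles this step into a graph; remove the decorator to run it eagerly
def train_one_step(model, optimizer, x, y):
    with tf.GradientTape() as tape:
        logits = model(x)
        loss = compute_loss(y, logits)
    # Backprop: gradients of the loss w.r.t. the trainable weights, then an optimizer update.
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

def train(model, optimizer, epochs=1):
    for epoch in range(epochs):
        for step, (x, y) in enumerate(mnist_dataset()):
            loss = train_one_step(model, optimizer, x, y)
            if step % 100 == 0:
                print('epoch', epoch, 'step', step, 'loss:', loss.numpy())

Dropping the decorator leaves the behavior unchanged but runs the step eagerly, which is convenient when stepping through the code with a debugger.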
Now that we've gotten our feet wet with a simple DNN, let's try something more advanced. Although the process is the same, we'll be working with some additional features:
Convolution, pooling, and dropout layers for building more complex models
Visualizing training with TensorBoard
Validation and test set evaluation for measuring generalizability
Exporting with SavedModel to save training progress and deploy trained models
As usual, we'll start by preparing our MNIST data.
import os
import time
import numpy as np
import tensorflow as tf
from tensorflow.python.ops import summary_ops_v2
from tensorflow import keras
from tensorflow.keras import datasets, layers, models, optimizers, metrics
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # or any of {'0', '1', '2'}
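The data pipeline mirrors the one from the first example, so the main new ingredient is the convolutional model itself. The architecture below is an illustrative sketch built from the Conv2D, MaxPooling2D, and Dropout layers mentioned above, not necessarily the tutorial's exact configuration:

def build_cnn_model():
    # A small conv/pool/dropout stack; the layer sizes here are illustrative.
    return models.Sequential([
        layers.Reshape((28, 28, 1), input_shape=(28, 28)),  # add a channels dimension
        layers.Conv2D(32, 5, padding='same', activation='relu'),
        layers.MaxPooling2D((2, 2), (2, 2), padding='same'),
        layers.Conv2D(64, 5, padding='same', activation='relu'),
        layers.MaxPooling2D((2, 2), (2, 2), padding='same'),
        layers.Flatten(),
        layers.Dense(1024, activation='relu'),
        layers.Dropout(0.4),
        layers.Dense(10)])  # logits for the 10 digit classes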
Next, let's set up forward and backward functionality. In addition to the training procedure, we'll also write a test() method for evaluation, and use summary_ops_v2 (imported above from tensorflow.python.ops) to record training summaries for TensorBoard.
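As a hedged sketch of that evaluation path (using the public tf.summary writer API rather than summary_ops_v2 directly, and with an illustrative log directory), test() might look roughly like this:

test_summary_writer = tf.summary.create_file_writer('/tmp/tensorflow/mnist/summaries/test')

def test(model, test_ds, step):
    # Accumulate average loss and accuracy over the whole test set.
    avg_loss = metrics.Mean('loss', dtype=tf.float32)
    accuracy = metrics.SparseCategoricalAccuracy('accuracy')
    for x, y in test_ds:
        logits = model(x, training=False)
        avg_loss(keras.losses.sparse_categorical_crossentropy(y, logits, from_logits=True))
        accuracy(y, logits)
    # Write the results where TensorBoard can find them.
    with test_summary_writer.as_default():
        tf.summary.scalar('loss', avg_loss.result(), step=step)
        tf.summary.scalar('accuracy', accuracy.result(), step=step)
    print('Test loss: {:0.4f}  accuracy: {:0.2f}%'.format(
        avg_loss.result().numpy(), accuracy.result().numpy() * 100))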
Now that we have our data, model, and training procedure ready, we just need to designate a directory and create a tf.train.Checkpoint object to save our parameters to as we train.
# Where to save checkpoints, TensorBoard summaries, etc.
MODEL_DIR = '/tmp/tensorflow/mnist'

def apply_clean():
    # Remove any model directory left over from a previous run.
    if tf.io.gfile.exists(MODEL_DIR):
        print('Removing existing model dir: {}'.format(MODEL_DIR))
        tf.io.gfile.rmtree(MODEL_DIR)
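The checkpoint object itself might be wired up roughly as follows; model, optimizer, and the 'checkpoints' subdirectory here are illustrative assumptions rather than the tutorial's verbatim code:

checkpoint_dir = os.path.join(MODEL_DIR, 'checkpoints')
checkpoint_prefix = os.path.join(checkpoint_dir, 'ckpt')

model = build_cnn_model()      # hypothetical helper sketched earlier
optimizer = optimizers.Adam()

# Track both the model weights and the optimizer state in one checkpoint.
checkpoint = tf.train.Checkpoint(model=model, optimizer=optimizer)

# Restore the latest checkpoint if one exists; call checkpoint.save(checkpoint_prefix)
# periodically inside the training loop to record progress.
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))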