Pro Deep Learning with TensorFlow: A Mathematical Approach to Advanced Artificial Intelligence in Python
Santanu Pattanayak

Deploy deep learning solutions in production with ease using TensorFlow, and develop the mathematical understanding and intuition required to invent new deep learning architectures and solutions on your own. Pro Deep Learning with TensorFlow provides practical, hands-on expertise so you can learn deep learning from scratch and deploy meaningful deep learning solutions. This book will get you up to speed quickly with TensorFlow and show you how to optimize different deep learning architectures. All of the practical aspects of deep learning that are relevant in any industry are emphasized, and you will be able to use the prototypes demonstrated to build new deep learning applications. The code presented in the book is available as IPython notebooks and scripts, allowing you to try out the examples and extend them in interesting ways. You will be equipped with the mathematical foundation and scientific knowledge to pursue research in this field and give back to the community.

What You'll Learn
Who This Book Is For

Data scientists and machine learning professionals, software developers, graduate students, and open source enthusiasts
Contents
Mathematical Foundations, 1
Introduction to Deep Learning Concepts and TensorFlow, 89
Convolutional Neural Networks, 153
Natural Language Processing Using Recurrent Neural Networks, 223