Interacting with the Neural Network From https://prateekvjoshi.com/2016/02/09/deep-learning-with-caffe-in-python-part-ii-interacting-with-a-model/
We just created the "net" object to hold our convolutional neural network. You can access the names of the input layers using "net.inputs"; you can see them by adding "print(net.inputs)" to your Python file. The "net" object contains two dictionaries: net.blobs holds the data flowing through the layers, and net.params holds the weights and biases of the network. You can inspect them using dir(net.blobs) and dir(net.params).

In this case, net.blobs['data'] contains an array of shape (1, 1, 256, 256). Now why does it have 4 dimensions if we are dealing with a simple 2D grayscale image? The first '1' refers to the number of images and the second '1' refers to the number of channels in the image; Caffe uses this (N, C, H, W) format for all data. If you check net.blobs['conv'], you'll see that it contains the output of the 'conv' layer, of shape (1, 10, 254, 254). If you run a 3×3 kernel over a 256×256 image with no padding, the output is of size 254×254, which is what we get here.

Let's inspect the parameters. net.params['conv'][0] contains the weight parameters of our neurons: an array of shape (10, 1, 3, 3), initialized according to the "weight_filler" settings. In the prototxt file we specified "gaussian", indicating that the kernel values are drawn from a Gaussian distribution. net.params['conv'][1] contains the bias parameters of our neurons: an array of shape (10,), initialized according to the "bias_filler" settings. We specified "constant", indicating that the values remain constant (0 in this case).

Caffe handles data as "blobs", which are basically memory abstraction objects. Our data is contained as an array in the field named 'data'.
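To see where the 254×254 shape comes from, here is a minimal NumPy sketch of a "valid" convolution (no padding, stride 1). This is an illustration of the shape arithmetic only, not Caffe's actual implementation, and the box-filter kernel is just a placeholder:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2D convolution: no padding, stride 1."""
    kh, kw = kernel.shape
    h, w = img.shape
    # Output shrinks by (kernel size - 1) in each dimension
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.random.rand(256, 256).astype(np.float32)
kernel = np.ones((3, 3), dtype=np.float32) / 9.0  # simple box filter as a stand-in
out = conv2d_valid(img, kernel)
print(out.shape)  # (254, 254): 256 - 3 + 1 in each dimension
```

This matches the (1, 10, 254, 254) blob shape above: each of the 10 filters produces one such 254×254 map.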
```python
import sys
sys.path.insert(0, '/path/to/caffe/python')

import caffe
import cv2
import numpy as np

# Load the network architecture in test mode
net = caffe.Net('myconvnet.prototxt', caffe.TEST)

print("\nnet.inputs =", net.inputs)
print("\ndir(net.blobs) =", dir(net.blobs))
print("\ndir(net.params) =", dir(net.params))
print("\nconv shape =", net.blobs['conv'].data.shape)
```
```python
# Read the input image as grayscale and shape it into a (1, 1, H, W) blob
img = cv2.imread('input_image.jpg', 0)
img_blobinp = img[np.newaxis, np.newaxis, :, :]
net.blobs['data'].reshape(*img_blobinp.shape)
net.blobs['data'].data[...] = img_blobinp

# Run a forward pass and save each filter's output as an image
net.forward()
for i in range(10):
    cv2.imwrite('output_image_' + str(i) + '.jpg', 255 * net.blobs['conv'].data[0, i])
```
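The two np.newaxis indices in the snippet above are what turn a 2D grayscale image into Caffe's 4D (N, C, H, W) blob layout. A quick standalone check, using a dummy array in place of the loaded image:

```python
import numpy as np

img = np.zeros((256, 256), dtype=np.uint8)  # stand-in for the grayscale image
blob = img[np.newaxis, np.newaxis, :, :]    # prepend batch and channel axes
print(blob.shape)  # (1, 1, 256, 256)
```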
The "net" object will be populated now. If you check "net.blobs['conv']", you will see that it is filled with data. We can plot the output of each of the 10 neurons in the layer; to access the image in the nth neuron, use net.blobs['conv'].data[0, n-1]. The loop above just created 10 output files corresponding to the 10 neurons. Let's check what the output from, say, the ninth neuron looks like: as we can see, it's an edge detector. You can check out the other files to see the different types of filters generated.
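One caveat when saving these outputs: multiplying the raw 'conv' activations by 255 assumes they lie in [0, 1], but in general they can be negative or larger, so cv2.imwrite will clip them. A min-max rescaling keeps the saved images in range. A sketch, using a dummy feature map in place of net.blobs['conv'].data[0, i]:

```python
import numpy as np

# Dummy feature map spanning [-1, 2], standing in for a real conv output
feat = np.linspace(-1.0, 2.0, 254 * 254, dtype=np.float32).reshape(254, 254)

# Rescale to [0, 255] regardless of the original range
lo, hi = feat.min(), feat.max()
scaled = 255.0 * (feat - lo) / (hi - lo)
print(scaled.min(), scaled.max())  # 0.0 255.0
```

You could drop this rescaling into the imwrite loop above in place of the bare 255 multiplication.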