We all know the exact form of popular activation functions such as **relu**, **sigmoid**, **tanh**, etc., and we can feed data to these functions to obtain their output directly. But how can we do that via Keras without explicitly specifying their functional forms?
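For comparison, the explicit functional forms are easy to evaluate with plain numpy; the goal below is to recover these same curves from Keras without ever writing them out:

```python
import numpy as np

x = np.linspace(-5, 5, 11)

sigmoid = 1.0 / (1.0 + np.exp(-x))   # sigmoid(x) = 1 / (1 + e^-x)
tanh = np.tanh(x)                    # tanh(x)
relu = np.maximum(0.0, x)            # relu(x) = max(0, x)

print(sigmoid[5], tanh[5], relu[5])  # at x = 0: 0.5, 0.0, 0.0
```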

This can be done following the four steps below:

1. Define a simple MLP model: one-dimensional input data, a single-neuron Dense layer as the hidden layer carrying the activation function of interest, and a single-neuron output layer with a '**linear**' activation function.

2. Extract the layers' outputs of the model (fitted or not) by iterating through **model.layers**.

3. Use the backend function **K.function()** to compute the extracted layer's output for a given input.

4. Feed the desired data to the function above to obtain the output of the chosen activation function.
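Under the hood, step 3 simply evaluates the hidden layer's affine transform followed by its activation: `activation(w * x + b)` for a one-neuron Dense layer. A minimal numpy sketch of this idea (the weight 1.5 and bias -0.2 are arbitrary values chosen for illustration, not what Keras would initialize):

```python
import numpy as np

def hidden_output(x, w=1.5, b=-0.2, activation=np.tanh):
    """What K.function returns for a one-neuron Dense layer:
    the activation applied to the affine transform w * x + b."""
    return activation(w * x + b)

x = (np.arange(100) - 50) / 10   # same input grid as the demo below
out = hidden_output(x)           # a tanh curve, shifted and scaled by w and b
print(out.shape)
```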

The code below is a demo:

The figure produced by this code shows that the geometric shape of each activation function is well captured.

```python
from keras.layers import Dense
from keras.models import Sequential
import keras.backend as K
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Font settings for non-ASCII (e.g. Chinese) labels,
# per http://blog.csdn.net/rumswell/article/details/6544377
plt.rcParams['font.sans-serif'] = ['SimHei']  # set the default font
plt.rcParams['axes.unicode_minus'] = False    # keep the minus sign from rendering as a box

def NNmodel(activationFunc='linear'):
    '''
    Define a neural network model. To define a different model,
    modify this function directly.
    '''
    if activationFunc in ('softplus', 'sigmoid'):
        winit = 'lecun_uniform'
    elif activationFunc == 'hard_sigmoid':
        winit = 'lecun_normal'
    else:
        winit = 'he_uniform'

    model = Sequential()
    model.add(Dense(1, input_shape=(1,), activation=activationFunc,
                    kernel_initializer=winit,
                    name='Hidden'))
    model.add(Dense(1, activation='linear', name='Output'))
    model.compile(loss='mse', optimizer='sgd')
    return model

def VisualActivation(activationFunc='relu', plot=True):
    x = (np.arange(100) - 50) / 10
    model = NNmodel(activationFunc=activationFunc)

    inX = model.input
    outputs = [layer.output for layer in model.layers if layer.name == 'Hidden']
    functions = [K.function([inX], [out]) for out in outputs]

    layer_outs = [func([x.reshape(-1, 1)]) for func in functions]
    activationLayer = layer_outs[0][0]

    activationDf = pd.DataFrame(activationLayer)
    result = pd.concat([pd.DataFrame(x), activationDf], axis=1)
    result.columns = ['X', 'Activated']
    result.set_index('X', inplace=True)
    if plot:
        result.plot(title=activationFunc)

    return result

# Now we can visualize them (assuming default settings):
actFuncs = ['linear', 'softmax', 'sigmoid', 'tanh', 'softsign',
            'hard_sigmoid', 'softplus', 'selu', 'elu']

figure = plt.figure()
for i, f in enumerate(actFuncs):
    # plot each activation function in turn
    figure.add_subplot(3, 3, i + 1)
    out = VisualActivation(activationFunc=f, plot=False)
    plt.plot(out.index, out.Activated)
    plt.title('Activation: ' + f)
```
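As an alternative to `K.function()`, you can build a sub-model from the original model's input and the hidden layer's output and use the standard `predict` API; it returns the same values. A sketch, rebuilding the same two-layer architecture as `NNmodel` above so it is self-contained (with `tanh` as the example activation):

```python
import numpy as np
from keras.layers import Dense
from keras.models import Model, Sequential

# Same architecture as NNmodel above, with a tanh hidden layer
model = Sequential()
model.add(Dense(1, input_shape=(1,), activation='tanh', name='Hidden'))
model.add(Dense(1, activation='linear', name='Output'))

# A sub-model mapping the original input to the hidden layer's output
hidden_model = Model(inputs=model.input,
                     outputs=model.get_layer('Hidden').output)

x = ((np.arange(100) - 50) / 10).reshape(-1, 1)
activated = hidden_model.predict(x)  # same values K.function would return
print(activated.shape)               # (100, 1)
```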