deep learning

January 6, 2018
 

Deep learning is not synonymous with artificial intelligence (AI) or even machine learning. Artificial intelligence is a broad field that aims to "automate cognitive processes." Machine learning is a subfield of AI that aims to automatically develop programs (called models) purely from exposure to training data.

Deep Learning and AI

Deep learning is a branch of machine learning in which the models are long chains of geometric functions, applied one after another to form stacks of layers. It is one approach among many, but, as discussed below, it is not on equal footing with the others.

What makes deep learning exceptional

Why is deep learning unequaled among machine learning techniques? Deep learning has achieved tremendous success in a wide range of tasks that have historically been extremely difficult for computers, especially in the area of machine perception: extracting useful information from images, videos, sound, and other sources.

Given sufficient training data (in particular, training data appropriately labelled by humans), it's possible to extract from perceptual data almost anything that a human could. Large corporations and businesses are deriving value from deep learning through human-level speech recognition, smart assistants, human-level image classification, vastly improved machine translation, and more. Google Now, Amazon Alexa, and the ad targeting used by Google, Baidu, and Bing are all powered by deep learning, as are superhuman Go playing and near-human-level autonomous driving.

In the summer of 2016, an experimental short movie, Sunspring, was directed using a script written by a long short-term memory (LSTM) algorithm, a type of deep learning algorithm.

How to build deep learning models

Given all this success recorded using deep learning, it's important to stress that building deep learning models is more of an art than a science. To build a deep learning model, or any machine learning model for that matter, one needs to consider the following steps (a minimal keras sketch follows the list):

  • Define the problem: What data does the organisation have? What are we trying to predict? Do we need to collect more data? How can we manually label the data? Make sure to work with a domain expert, because you can't interpret what you don't know!
  • Decide what metrics we can use to reliably measure the success of our goals.
  • Prepare a validation process that will be used to evaluate the model.
  • Data exploration and pre-processing: this is where most of the time will be spent, on tasks such as normalization, manipulation, and joining of multiple data sources.
  • Develop an initial model that does better than a baseline model. This gives some indication of whether machine learning is viable for the problem.
  • Refine the model architecture by tuning hyperparameters and adding regularization, making changes based on validation data.
  • Avoid overfitting.
  • Once happy with the model, deploy it into the production environment. This can be difficult for many organisations, given that deep learning scoring code is large. This is where SAS can help: SAS has developed a scoring mechanism called "astore" that allows deep learning models to be pushed into production with just a click.
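To make the steps concrete, here is a minimal keras-flavoured sketch of the loop from baseline to refinement. It is only an illustration: X_train, y_train, X_val, and y_val are hypothetical, already-prepared arrays for a binary classification problem.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout

# Hypothetical pre-processed data (placeholders for this sketch).
X_train, y_train = np.random.rand(1000, 20), np.random.randint(0, 2, 1000)
X_val, y_val = np.random.rand(200, 20), np.random.randint(0, 2, 200)

# An initial model intended to beat a trivial baseline (majority class).
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(X_train.shape[1],)))
model.add(Dropout(0.3))  # regularization, tuned against validation data
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])  # the metric chosen up front

# Evaluate on held-out validation data, watching for overfitting.
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          epochs=20, batch_size=128)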

Is the deep learning hype justified?

We're still in the middle of the deep learning revolution, trying to understand the limitations of these algorithms. Due to its unprecedented successes, there has been a lot of hype around deep learning and AI. It's important for managers, professionals, researchers, and industrial decision makers to be able to separate the media-created hype from reality.

Despite the progress in machine perception, we are still far from human-level AI. Our models can only perform local generalization, adapting to new situations that remain similar to past data, whereas human cognition is capable of extreme generalization: quickly adapting to radically novel situations and planning for long-term future situations. To make this concrete, imagine you've developed a deep network controlling a human body and you want it to learn to safely navigate a city without getting hit by cars. The net would have to die many thousands of times in various situations until it could infer that cars are dangerous and develop appropriate avoidance behaviors. Dropped into a new city, the net would have to relearn most of what it knows. Humans, on the other hand, are able to learn safe behaviors without having to die even once, thanks to our power of abstract modeling of hypothetical situations.

Lastly, remember that deep learning is a long chain of geometric functions. To learn its parameters via gradient descent, one key technical requirement is that the chain be differentiable and continuous, which is a significant constraint.
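To see why differentiability matters, here is a hand-derived gradient-descent step for a toy two-function chain in plain numpy. The model and data are invented purely for illustration:

import numpy as np

# A two-link "chain of geometric functions": y_hat = w2 * tanh(w1 * x).
# Because tanh and multiplication are differentiable, the chain rule
# yields exact gradients, which is what gradient descent requires.
x, y = 2.0, 1.0    # one made-up training example
w1, w2 = 0.5, 0.5  # initial parameters
lr = 0.1           # learning rate

for step in range(100):
    h = np.tanh(w1 * x)                  # first function in the chain
    y_hat = w2 * h                       # second function
    loss = (y_hat - y) ** 2
    # Chain rule, link by link:
    dloss = 2 * (y_hat - y)
    dw2 = dloss * h
    dw1 = dloss * w2 * (1 - h ** 2) * x  # d tanh(u)/du = 1 - tanh(u)^2
    w1, w2 = w1 - lr * dw1, w2 - lr * dw2

print(round(loss, 6))  # shrinks toward 0; a discontinuous step function here would break this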

Looking beyond the AI and deep learning hype was published on SAS Users.

December 22, 2017
 
In keras, we can visualize the geometric properties of activation functions by applying backend functions to the layers of a model.

We all know the exact forms of popular activation functions such as 'sigmoid', 'tanh', and 'relu', and we can feed data to these functions directly to obtain their output.
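For instance, a few of these closed forms can be evaluated directly in numpy. This quick sketch is mine; the grid matches the one used in the demo below:

import numpy as np

x = (np.arange(100) - 50) / 10   # same input grid as the demo below
sigmoid = 1 / (1 + np.exp(-x))   # closed-form sigmoid
tanh = np.tanh(x)
relu = np.maximum(0, x)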

But how can we obtain the same output via keras, without explicitly specifying the functional forms? This can be done following the four steps below:

1. Define a simple MLP model with one-dimensional input data, a one-neuron dense hidden layer, and a one-neuron output layer with a 'linear' activation function.
2. Extract the layers' output of the model (fitted or not) by iterating through model.layers.
3. Use the backend function K.function() to obtain the calculated output for a given input.
4. Feed the desired data to the above functions to obtain the output from the appropriate activation function.

The code below is a demo:

from keras.layers import Dense, Activation
from keras.models import Sequential
import keras.backend as K
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Optional font setup so CJK characters render in matplotlib
# (per http://blog.csdn.net/rumswell/article/details/6544377);
# not needed now that the plot titles are ASCII.
plt.rcParams['font.sans-serif'] = ['SimHei']
plt.rcParams['axes.unicode_minus'] = False  # keep minus signs from rendering as boxes

def NNmodel(activationFunc='linear'):
    '''
    Define a neural network model. To test a different model,
    modify this function directly.
    '''
    if (activationFunc == 'softplus') | (activationFunc == 'sigmoid'):
        winit = 'lecun_uniform'
    elif activationFunc == 'hard_sigmoid':
        winit = 'lecun_normal'
    else:
        winit = 'he_uniform'
    model = Sequential()
    model.add(Dense(1, input_shape=(1,), activation=activationFunc,
                    kernel_initializer=winit,
                    name='Hidden'))
    model.add(Dense(1, activation='linear', name='Output'))
    model.compile(loss='mse', optimizer='sgd')
    return model

def VisualActivation(activationFunc='relu', plot=True):
    x = (np.arange(100) - 50) / 10

    model = NNmodel(activationFunc=activationFunc)

    inX = model.input
    outputs = [layer.output for layer in model.layers if layer.name == 'Hidden']
    functions = [K.function([inX], [out]) for out in outputs]

    layer_outs = [func([x.reshape(-1, 1)]) for func in functions]
    activationLayer = layer_outs[0][0]

    activationDf = pd.DataFrame(activationLayer)
    result = pd.concat([pd.DataFrame(x), activationDf], axis=1)
    result.columns = ['X', 'Activated']
    result.set_index('X', inplace=True)
    if plot:
        result.plot(title=activationFunc)

    return result


# Now we can visualize them (assuming default settings):
actFuncs = ['linear', 'softmax', 'sigmoid', 'tanh', 'softsign', 'hard_sigmoid', 'softplus', 'selu', 'elu']

figure = plt.figure()
for i, f in enumerate(actFuncs):
    # Plot each activation function in its own subplot
    figure.add_subplot(3, 3, i + 1)
    out = VisualActivation(activationFunc=f, plot=False)
    plt.plot(out.index, out.Activated)
    plt.title('Activation: ' + f)

This figure is the output from the above code. As we can see, the geometric property of each activation function is well captured.

September 7, 2017
 
In many introductory image recognition tasks, the famous MNIST data set is typically used. However, there are some issues with this data:

1. It is too easy. For example, a simple MLP model can achieve 99% accuracy, and so can a 2-layer CNN.

2. It is overused. Virtually every machine learning introduction or image recognition task uses this data set as a benchmark. But because it is so easy to get a nearly perfect classification result, its value as a benchmark is discounted, and it is not really useful for modern machine learning/AI tasks.

Hence the Fashion-MNIST dataset. This dataset was developed as a direct replacement for the MNIST data, in the sense that:

1. It is the same size and style: 28x28 grayscale images.
2. Each image is associated with 1 out of 10 classes, which are:
       0:T-shirt/top,
       1:Trouser,
       2:Pullover,
       3:Dress,
       4:Coat,
       5:Sandal,
       6:Shirt,
       7:Sneaker,
       8:Bag,
       9:Ankle boot
3. There are 60,000 training samples and 10,000 testing samples.

Here is a snapshot of some samples:
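As a convenience for labelling plots later, the class mapping above can be transcribed into a Python dict. This snippet is my own addition, not part of the dataset download:

# Fashion-MNIST label-to-name mapping, transcribed from the list above.
fashion_labels = {
    0: 'T-shirt/top', 1: 'Trouser', 2: 'Pullover', 3: 'Dress', 4: 'Coat',
    5: 'Sandal', 6: 'Shirt', 7: 'Sneaker', 8: 'Bag', 9: 'Ankle boot',
}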
Since its appearance, there have been multiple submissions benchmarking this data, some achieving 95%+ accuracy, most noticeably residual networks and separable CNNs.
I am also benchmarking against this data using keras. keras is a high-level framework for building deep learning models, with a choice of TensorFlow, Theano, or CNTK as the backend. It is easy to install and use. For my application, I used the CNTK backend. You can refer to this article for its installation.

Here, I will benchmark two models. One is an MLP with the layer structure 256-512-100-10, and the other is a VGG-like CNN. Code is available at my github: https://github.com/xieliaing/keras-practice/tree/master/fashion-mnist

The first model achieved an accuracy of 0.89 to 0.90 on the testing data after 100 epochs, while the latter achieved an accuracy above 0.94 on the testing data after 45 epochs. First, read in the Fashion-MNIST data:

import numpy as np
import io, gzip, requests
train_image_url = "http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz"
train_label_url = "http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz"
test_image_url = "http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz"
test_label_url = "http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz"

def readRemoteGZipFile(url, isLabel=True):
    response = requests.get(url, stream=True)
    gzip_content = response.content
    fObj = io.BytesIO(gzip_content)
    content = gzip.GzipFile(fileobj=fObj).read()
    if isLabel:
        offset = 8   # label files have an 8-byte header
    else:
        offset = 16  # image files have a 16-byte header
    result = np.frombuffer(content, dtype=np.uint8, offset=offset)
    return result

train_labels = readRemoteGZipFile(train_label_url, isLabel=True)
train_images_raw = readRemoteGZipFile(train_image_url, isLabel=False)

test_labels = readRemoteGZipFile(test_label_url, isLabel=True)
test_images_raw = readRemoteGZipFile(test_image_url, isLabel=False)

train_images = train_images_raw.reshape(len(train_labels), 784)
test_images = test_images_raw.reshape(len(test_labels), 784)
Let's first visualize it using t-SNE, said to be one of the most effective dimensionality reduction tools for visualization. The plot function below is borrowed from a sklearn example.

from sklearn import manifold
from time import time
import matplotlib.pyplot as plt
from matplotlib import offsetbox
plt.rcParams['figure.figsize']=(20, 10)
# Scale and visualize the embedding vectors
def plot_embedding(X, Image, Y, title=None):
    x_min, x_max = np.min(X, 0), np.max(X, 0)
    X = (X - x_min) / (x_max - x_min)

    plt.figure()
    ax = plt.subplot(111)
    for i in range(X.shape[0]):
        plt.text(X[i, 0], X[i, 1], str(Y[i]),
                 color=plt.cm.Set1(Y[i] / 10.),
                 fontdict={'weight': 'bold', 'size': 9})

    if hasattr(offsetbox, 'AnnotationBbox'):
        # only print thumbnails with matplotlib > 1.0
        shown_images = np.array([[1., 1.]])  # just something big
        for i in range(X.shape[0]):
            dist = np.sum((X[i] - shown_images) ** 2, 1)
            if np.min(dist) < 4e-3:
                # don't show points that are too close
                continue
            shown_images = np.r_[shown_images, [X[i]]]
            imagebox = offsetbox.AnnotationBbox(
                offsetbox.OffsetImage(Image[i], cmap=plt.cm.gray_r),
                X[i])
            ax.add_artist(imagebox)
    plt.xticks([]), plt.yticks([])
    if title is not None:
        plt.title(title)

t-SNE is very computationally expensive, so for impatient people like me, I used 1,000 samples for a quick run. If your PC is fast enough and you have the time, you can run t-SNE against the full dataset.

sampleSize = 1000
samples = np.random.choice(range(len(train_labels)), size=sampleSize)  # Y_train is defined later; use train_labels here
tsne = manifold.TSNE(n_components=2, init='pca', random_state=0)
t0 = time()
sample_images = train_images[samples]
sample_targets = train_labels[samples]
X_tsne = tsne.fit_transform(sample_images)
t1 = time()
plot_embedding(X_tsne, sample_images.reshape(sample_targets.shape[0], 28, 28), sample_targets,
               "t-SNE embedding of the digits (time %.2fs)" %
               (t1 - t0))
plt.show()
We see that several features, including mass size, bottom split, and symmetry, separate the categories. Deep learning excels here because you don't have to manually engineer the features; the algorithm extracts them for you.

In order to build our own networks, we first import some libraries:

import keras  # needed below for keras.utils.to_categorical
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Activation  # used by both models below
from keras.layers.convolutional import Conv2D, MaxPooling2D, AveragePooling2D
from keras.layers.advanced_activations import LeakyReLU
We also do standard data preprocessing:

X_train = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
X_test = test_images.reshape(test_images.shape[0], 28, 28, 1).astype('float32')

X_train /= 255
X_test /= 255

X_train -= 0.5
X_test -= 0.5

X_train *= 2.
X_test *= 2.

Y_train = train_labels
Y_test = test_labels
Y_train2 = keras.utils.to_categorical(Y_train).astype('float32')
Y_test2 = keras.utils.to_categorical(Y_test).astype('float32')
Here is the simple MLP implemented in keras:

mlp = Sequential()
mlp.add(Dense(256, input_shape=(784,)))
mlp.add(LeakyReLU())
mlp.add(Dropout(0.4))
mlp.add(Dense(512))
mlp.add(LeakyReLU())
mlp.add(Dropout(0.4))
mlp.add(Dense(100))
mlp.add(LeakyReLU())
mlp.add(Dropout(0.5))
mlp.add(Dense(10, activation='softmax'))
mlp.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
mlp.summary()
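The post doesn't show the MLP's training call. Note that the MLP expects the flat 784-dimensional input rather than the (28, 28, 1) tensors prepared above for the CNN, so a plausible sketch of the fit (an assumption on my part, applying the same scaling to flat copies) is:

# Hypothetical training call for the MLP; it needs flat inputs, so we
# rescale flat copies of the raw images the same way as X_train/X_test above.
Xf_train = (train_images.astype('float32') / 255 - 0.5) * 2.
Xf_test = (test_images.astype('float32') / 255 - 0.5) * 2.
mlp.fit(Xf_train, Y_train2, validation_data=(Xf_test, Y_test2),
        epochs=100, verbose=1, batch_size=500)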

This model achieved almost 90% accuracy on the test dataset at about 100 epochs. Now, let's build a VGG-like CNN model. We use an architecture that is similar to VGG but still very different. Because the image size is small, the original VGG architecture would very likely overfit and perform poorly on the testing data, which is observed in the publicly submitted benchmarks listed above. Building such a model in keras is very natural and easy:

num_classes = len(set(Y_train))
model3 = Sequential()
model3.add(Conv2D(filters=32, kernel_size=(3, 3), padding="same",
                  input_shape=X_train.shape[1:], activation='relu'))
model3.add(Conv2D(filters=64, kernel_size=(3, 3), padding="same", activation='relu'))
model3.add(MaxPooling2D(pool_size=(2, 2)))
model3.add(Dropout(0.5))
model3.add(Conv2D(filters=128, kernel_size=(3, 3), padding="same", activation='relu'))
model3.add(Conv2D(filters=256, kernel_size=(3, 3), padding="valid", activation='relu'))
model3.add(MaxPooling2D(pool_size=(3, 3)))
model3.add(Dropout(0.5))
model3.add(Flatten())
model3.add(Dense(256))
model3.add(LeakyReLU())
model3.add(Dropout(0.5))
model3.add(Dense(256))
model3.add(LeakyReLU())
#model3.add(Dropout(0.5))
model3.add(Dense(num_classes, activation='softmax'))
model3.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model3.summary()
This model has about 1.5 million parameters. We can call the 'fit' method to train the model:

model3_fit=model3.fit(X_train, Y_train2, validation_data = (X_test, Y_test2), epochs=50, verbose=1, batch_size=500)
After 40 epochs, this model achieves an accuracy of 0.94 on the testing data. Obviously, there is also an overfitting problem with this model; we will address this issue later.
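One standard mitigation, shown here only as a sketch and not necessarily the fix the author has in mind, is early stopping on validation loss:

from keras.callbacks import EarlyStopping

# Stop training once validation loss hasn't improved for 5 epochs.
early_stop = EarlyStopping(monitor='val_loss', patience=5)
model3.fit(X_train, Y_train2, validation_data=(X_test, Y_test2),
           epochs=50, batch_size=500, callbacks=[early_stop])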

May 17, 2017
 

Deep learning made the headlines when the UK's AlphaGo team beat Lee Sedol, holder of 18 international titles, at the board game Go. Go is more complex than other games, such as chess, where machines have previously crushed famous players: the number of potential moves explodes exponentially, so it wasn't [...]

Deep learning: What’s changed? was published on SAS Voices by Colin Gray

January 9, 2017
 

I've long been fascinated by both science and the natural world around us, inspired by the amazing Sir David Attenborough with his ever-engaging documentaries and boundless enthusiasm for nature, and also by the late, great Carl Sagan and his ground-breaking documentary series, COSMOS. The relationships between the creatures, plants and […]

Intelligent ecosystems and the intelligence of things was published on SAS Voices.

June 6, 2016
 

There's no doubt that artificial intelligence (AI) is here and is rapidly gaining the attention of brands large and small. As I talk to customers and prospects, they are interested in understanding how AI and its subcomponents (cognitive computing, machine learning, or even deep learning) are being woven into various departments (marketing, sales, service and support) at organizations across industries.

Here are some examples of cognitive computing and machine learning today at organizations, and how these capabilities will enhance customer experience in the future.

I think it's important to start with a few foundational facts:

  • AI as a practice is not new – John McCarthy and others started their research into this area back in the 1950s.
  • AI and its subcomponents are rooted in predictive analytics (neural networks, data mining, natural language processing, etc., all have their beginnings here).
  • Automation and the use of supervised and unsupervised algorithms are crucial to machine learning and cognitive computing use cases.
  • Deep learning uses the concept of teaching and training to accomplish more advanced automation tasks. It’s important to note that deep learning is not as prevalent from a customer experience perspective as machine learning and cognitive computing.

Let's take a look at what AI means for brands as the customer experience becomes the primary differentiator for marketing organizations.


A cognitive computing use case

Cognitive computing enables software to engage in human-like interactions. It uses analytical processes (voice-to-text, natural language processing, and text and sentiment analysis) to determine answers to questions.

For example, a SAS customer uses automation to provide quicker responses to service requests that come in to the brand's contact center. It can send an automated reply to service inquiries, direct the customer to the appropriate department, and send customer responses back to the channel, all using SAS solutions. These capabilities reduce the number of replies that require human intervention and improve service response times. This same use case can be applied across industries such as retail, telecom, financial services and utilities. The end result? A happier customer and an improved customer experience.


Analytics: the core of machine learning

Machine learning uses software that can scan data to identify patterns and predict future results with minimal human intervention.

Analytics play an important role. Model retraining, the use of historical data and environmental conditions all serve as inputs into the supervised and unsupervised algorithms that machine learning uses. For example, some of our large telecom and financial services providers use data, customer journey maps and past patterns to be able to serve timely and relevant offers during customer interactions.

Many of our customers can do this in less than one second, providing responses and replies that are relevant and individualized. Another great example of machine learning is the development work that SAS is currently doing on its marketing software.

Our customer intelligence solutions use embedded machine learning processes to make setting up activities and completing tasks in the software easier for analysts and marketers alike. For instance, the software will automatically choose the optimal customer segment and creative combinations for a campaign. It will also recommend the best time to follow up with a customer or segment and on the customer’s preferred devices. Machine learning also gives marketers the ability to understand how to use and modify digital assets for the most reach and optimal conversions.

The newest addition to artificial intelligence

Deep learning, a newer concept that relies on deep neural networks, is certainly coming to the marketing and service realms. Many companies have started looking at how we teach and train software to accomplish complex activities: drive cars, play chess, make art (the list goes on). As for marketing, I believe we will see deep learning being used to run marketing programs, initiate customer service interactions and map customer journeys in detail.

These are just a few examples of how we are seeing AI improve the customer experience. You and I, as digitally empowered consumers, will certainly benefit from man and machine working together to automate the interactions that we have with brands on a daily basis. I urge you to keep an eye out for how brands big and small are automating the interactions they have with you – I think you will be pleasantly surprised with the outcome.

tags: artificial intelligence, cognitive computing, customer analytics, deep learning, marketing automation, marketing software, predictive analytics, Predictive Marketing, SAS Customer Intelligence 360

How artificial intelligence will enhance customer experiences was published on Customer Intelligence.

April 20, 2016
 

If I were to show you a picture of a house, you would know it’s a house without even stopping to think about it. Because you have seen hundreds of different types of houses, your brain has come to recognize the features – a roof, a door, windows, a front […]

Deep learning methods and applications was published on SAS Voices.