Gentle Intro to Machine Learning for Product Managers

Understand how machine learning works under the hood without getting overly technical.

Pranav Ambwani
Towards Data Science

--

Imagine a Rock Paper Scissors Game…

And you want to write code so that as you shape your hand into a rock, paper, or scissors, the computer recognizes it and plays against you.

Think about writing code to implement such a game. You’d have to pull images from the camera, look at the content of those images, etc.

That would end up being a lot of code for you to write, and it would quickly become so complicated that programming the game this way is practically infeasible.

Traditional Programming

In traditional programming — something that has been our bread and butter for many years — we think about expressing rules in a programming language.

In traditional programming, the rules defined by programmers generally act on data to give us answers.

For example, in Rock Paper Scissors, the data would be an image and the rules would be all the if-then statements looking at the pixels in that image to try and determine the object.

The issue with traditional programming is that it only covers the cases we can predict. But what if we don’t even know that a certain rule exists?
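
To make that concrete, here’s a toy sketch of what a rule-based version might look like. The inputs and thresholds are entirely made up for illustration; a real implementation would need vastly more (and messier) rules.

# A hypothetical rule-based classifier. The inputs are two numbers we imagine
# extracting from the camera image; the thresholds are arbitrary, purely to
# show how quickly hand-written rules pile up.
def classify_hand(skin_area, extended_fingers):
    if extended_fingers == 0:
        return "rock"       # a closed fist: no fingers showing
    if extended_fingers == 2:
        return "scissors"   # exactly two extended fingers
    if extended_fingers >= 4 and skin_area > 20000:
        return "paper"      # an open hand covering a large area
    return "unknown"        # every case we did not anticipate

print(classify_hand(skin_area=25000, extended_fingers=4))  # "paper"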

Machine Learning

Machine learning turns the tables on traditional programming. It comes up with predictions by adjusting certain parameters until they produce the desired output.

This is a complicated way of saying that the model learns from its mistakes, just like us.

Parameters are the values inside those functions that get set and changed as the model tries to learn how to match the inputs to the outputs.

Think of these functions as moments in your life and the parameters as your learnings from those moments. You hope to never repeat those mistakes, just as you’d hope your model wouldn’t.
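
Here’s a toy example of my own (not from any real system) where the “function” is just y = w * x and the single parameter is w. Learning means nudging w after each mistake until the outputs match the answers we provided:

# Toy example: learn the parameter w in y = w * x from labeled examples.
# The true relationship here is y = 3 * x; the model has to discover that.
examples = [(1, 3), (2, 6), (3, 9), (4, 12)]  # (input, correct answer) pairs

w = 0.0             # the parameter starts out wrong
learning_rate = 0.01

for step in range(200):
    for x, y_true in examples:
        y_pred = w * x                  # the model's current guess
        error = y_pred - y_true         # how wrong it was
        w -= learning_rate * error * x  # adjust the parameter to shrink the error

print(round(w, 2))  # ~3.0: the parameter has been "learned" from the data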

In machine learning, instead of writing and expressing the rules in code, we provide A LOT of examples along with their answers (labels), and then have the machine infer the rules that map one to the other.

For example, in our Rock Paper Scissors game, we can state what the pixels are for a rock. That is, we tell the computer, “Hey, this is what a rock looks like.” We can do this several thousand times to get diverse hands, skin tones, etc.

If a machine can then figure out the patterns between these, we now have machine learning; we have a computer that’s determined these things for us!
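
In practice, “telling the computer what a rock looks like” usually just means organizing the labeled images into folders and letting a library read them. Here’s a small sketch using TensorFlow; the folder layout (hands/rock, hands/paper, hands/scissors) is an assumption for illustration:

import tensorflow as tf

# Assumed folder layout: hands/rock/*.jpg, hands/paper/*.jpg, hands/scissors/*.jpg
# The folder name is the label -- that is how we "tell" the computer what a rock looks like.
train_data = tf.keras.utils.image_dataset_from_directory(
    'hands',
    label_mode='categorical',  # one label per class: rock, paper, scissors
    image_size=(150, 150)      # resize every image to the size the model expects
)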

How Can I Run an Application That Looks Like This?

In the training phase, we’ve trained a model on this labeled data; that model is essentially a neural network.

At runtime, we’ll pass in our data, and our model is going to give out something called predictions.

For example, say that you’ve trained your model on a lot of rocks, papers, and scissors.

When you hold your fist up to a webcam, your model will grab the data of your fist and give back what we call a prediction.

The prediction might be something along the lines of: there’s an 80% chance this is a rock, and a 10% chance each that it’s a paper or a scissors.

A lot of the terminology in machine learning is a little different from that of traditional programming. We call it training rather than coding and compiling; at runtime we call it inference, and out of that inference we get predictions.

You can easily implement this model using TensorFlow. Below is a very simple piece of code for creating such a neural network:

import tensorflow as tf

# A tiny neural network: flatten the 150x150 RGB image, learn features in a
# dense layer, and output a probability for each of the three classes.
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(150, 150, 3)),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax')
])
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
model.fit(..., epochs=100)  # pass in your labeled training images and answers here
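
And at runtime, getting a prediction out of that trained model is a single call. The sketch below assumes you’ve already captured a webcam frame and resized it to 150x150 RGB; the percentages in the comment are just illustrative:

import numpy as np

# `frame` stands in for one webcam capture, already resized to 150x150 RGB.
frame = np.random.rand(150, 150, 3).astype('float32')  # placeholder for a real image

# The model expects a batch of images, so wrap the single frame in an extra dimension.
probabilities = model.predict(np.expand_dims(frame, axis=0))[0]

for label, p in zip(['rock', 'paper', 'scissors'], probabilities):
    print(f"{label}: {p:.0%}")  # e.g. rock: 80%, paper: 10%, scissors: 10%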

Diving Deeper

Below are some great resources that I’ve been collecting for the last few months. I hope that they can help guide you in the right direction.

MOOCs

Tutorials

Talks and Podcasts

  • AI And Deep Learning. From types of machine intelligence to a tour of algorithms, a16z Deal and Research team head Frank Chen walks us through the basics (and beyond) of AI and deep learning in this slide presentation.
  • a16z Podcast: The Product Edge in Machine Learning Startups. A lot of machine learning startups initially feel a bit of “impostor syndrome” around competing with big companies, because (the argument goes) those companies have all the data; surely we can’t beat that! Yet there are many ways startups can, and do, successfully compete with big companies. The guests on this podcast argue that you can achieve great results in a lot of areas even with a relatively small data set, if you build the right product on top of it.
  • When Humanity Meets AI. Fei-Fei Li [who publishes under Li Fei-Fei], Andreessen Horowitz Distinguished Visiting Professor of Computer Science and associate professor at Stanford University, argues that we need to inject a stronger humanistic thinking element into designing and developing algorithms and A.I. that can cohabitate with people in social (including crowded) spaces.
  • “Large-Scale Deep Learning for Intelligent Computer Systems”, Google Tech Talk with Jeff Dean at Campus Seoul, March 2016.
