Frank Chen, A16Z
Frank Chen is a partner at the venture capital firm Andreessen Horowitz. He runs the research and deal team at the firm, which systematically identifies and evaluates investment opportunities, and builds knowledge about technology, people and trends.
Prior to Andreessen Horowitz, Frank was the vice president of Strategy for HP Software, where he helped the company understand and act on changes resulting from the rapid enterprise adoption of virtualization technologies across servers, network and storage. Frank joined HP Software through its acquisition of Opsware, where he was the vice president of Products and User Experience for a broad set of Opsware’s data center automation products.
Frank holds a B.S. in Symbolic Systems from Stanford University, where he graduated with distinction and was elected to Phi Beta Kappa. Capgemini’s Digital Transformation Institute spoke to Frank to understand how he sees AI as a partner at one of Silicon Valley’s most respected VC firms working with some of the hottest startups.
AI is here, and it will impact everyone
AI has been around for a while. Why is it different now?
AI is now inside a bunch of mainstream applications that everybody uses, from Siri to Pinterest. This is a relatively recent phenomenon. Probably a quarter of applications have AI today, and that share is growing fast, compared with less than one percent 20 years ago. The big change from a technology point of view has been 'deep learning' machine learning algorithms, in conjunction with the availability of lots of data and the ability to crunch that data on many different computers. That is what has made the difference since the field began in 1956.
Where will AI be most useful?
Asking where AI will be most useful is like asking which sectors were going to benefit when database technologies emerged. It turned out the answer was every single one of them. It's hard to imagine a piece of software where you don't have a database behind it. They were good for United Airlines, good for Starbucks, and good for Exxon. AI is exactly like that. Companies need to be thinking about what to build, how to price, how to reach customers, and how to support them. All of that is going to be turbocharged by AI.
What are some of the ways in which AI can help organizations?
The way to think about AI is that it can do lots of things that previously you would give to humans to do. Imagine that on every customer support call you could have someone listen in who could tell you what product or service the customer is interested in, or whether the customer is getting angry. You could hire an additional rep to listen in on every call, but it wouldn't be cost effective and nobody does it. Companies end up recording all their calls and then they don't listen to those calls. So, AI can listen to those calls and answer all of those questions for you: are they getting angry, what products should we recommend at this point, are they about to churn, and if they're about to churn, what can we offer them so they don't? Think of it as having an army of interns that can listen in to calls, read documents, look at pictures, or make predictions based on past history. That's what AI is going to be able to do.
Are you seeing a growing number of startups offering AI solutions?
Yes, basically all the startups that we find these days are AI startups. When we first started as a firm in 2009, nobody called themselves AI-powered. Flash forward to today, and 80% of the startups we see call themselves AI-powered. It just shows the sea change that's happening. AI is rapidly becoming the must-have technology – like the iPhone app seven years ago. When we see a startup that doesn't have machine learning and AI, the first question we ask is, 'So who's doing this with AI?'
The talent gap is only temporary
When the top tech firms are scooping up all the hot AI talent, what should traditional organizations do?
When I took CS221, Stanford's 'Intro to AI' course, back in the late Stone Age, there were 50 people in it. This semester there are three 'intro to AI' classes, and each one of them has 1,000 students. For context, Stanford has something like 6,000 undergrads total, so a large share of the entire undergraduate population is taking one of the three 'intro to AI' classes. The ecosystem is catching up fast, and the imbalance of talent we have now is only temporary.
So, do you believe a combination of re-skilling/ up-skilling can help when it comes to developing AI talent?
Yes, that's exactly right. Every university is adding this set of techniques to its computer science curriculum, and there are dozens of boot camps. It's also important to bear in mind that large traditional organizations don't necessarily need the multi-million-dollar top AI researcher. They don't need to invent a brand-new AI algorithm or technique; they can just take the ones that are being invented and apply them to their business. So, the great news is that those research engineers who go into Google, Apple, Facebook, Amazon, and so on are open sourcing what they're doing.
What I don't want is an industry executive to read a news article and conclude that since they can't afford a $10 million artificial intelligence researcher, they are just not going to do anything. That's the absolute wrong conclusion to draw. The right conclusion is, 'Hey, those top multi-million-dollar researchers are open sourcing their stuff – that's fantastic. I can take advantage of it without paying millions.'
Sending motivated, curious people to get the Udacity Artificial Intelligence nanodegree is a six-month, $1,600 investment. It's not two years and $100,000.
AI-native organizations treat data very seriously
So, what are some skills that traditional organization should focus on developing?
Andrew Ng, who is one of the most well-known researchers in this space, has this really good analogy. In the early days of the Internet, even your neighborhood shopping mall had a website. But just because they had a website, that didn’t make them an internet company. The Internet retailer was Amazon. And becoming an Internet retailer required Amazon to master a lot of things. They mastered continuous development and deployment. They changed their website 100 times a day as opposed to the mall which changed it like once a year. They mastered A/B testing. So, they mastered all these things and that’s what made them an Internet native.
The same thing is happening with AI. Just because you use AI techniques doesn’t make you an AI native. As an AI native you’re going to have to master a whole different set of skills. We don’t know exactly what set of skills these are going to be yet, but there are a few early signs. An AI-native company is going to treat data very seriously – harvesting the data, feeding the data into algorithms, and labeling the data so that the algorithms can make accurate predictions.
Make AI a daily habit for your organization
What are some areas where partnership with AI-driven startups makes sense for large organizations?
Building AI tools requires consistency. You have to make AI a daily habit, because if you don’t make it a daily habit your competitor will make it a daily habit. There are startups that offer tools for people who are building AI applications.
If you have a model that makes predictions, SigOpt, for example, can make that model better by automatically tuning it. Every model has a dozen or more knobs that you can turn and, depending on where the knobs are set, the model produces different predictions. There's an optimal setting for each knob. So, you can either pay a data scientist to experiment with the settings by hand, or you can just use SigOpt, which will automatically put the knobs in the right place to make the best possible recommendations. So that's a tool that organizations can use to get that expertise in-house.
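The knob-tuning idea can be sketched as a simple random search. This is an illustrative stand-in, not SigOpt's actual service (which does the optimization for you behind an API): the `model_score` function and the knob names here are hypothetical, standing in for training a real model and measuring its validation score.

```python
import random

def model_score(learning_rate, regularization):
    """Toy stand-in for a model's validation score; in practice this
    would train a model with these knob settings and evaluate it.
    (Hypothetical: the peak is placed at learning_rate=0.1, regularization=1.0.)"""
    return 1.0 - (learning_rate - 0.1) ** 2 - 0.01 * (regularization - 1.0) ** 2

def random_search(n_trials=200, seed=0):
    """Try random knob settings and keep the best-scoring one."""
    rng = random.Random(seed)
    best_score, best_knobs = float("-inf"), None
    for _ in range(n_trials):
        knobs = {
            "learning_rate": rng.uniform(0.001, 0.5),
            "regularization": rng.uniform(0.1, 10.0),
        }
        score = model_score(**knobs)
        if score > best_score:
            best_score, best_knobs = score, knobs
    return best_score, best_knobs

best_score, best_knobs = random_search()
print(best_knobs, round(best_score, 4))
```

A paid data scientist trialing settings by hand plays the role of this loop; tools like SigOpt replace the blind random sampling with smarter strategies (e.g. Bayesian optimization) that need far fewer trials.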
What are some low-hanging use cases for AI?
The low-hanging fruit is basically anywhere you're making a recommendation, anywhere you need to understand what people are saying, or anywhere you need to recognize what's happening inside a picture. So, for example, having a chat interface to your brand so that people can chat with you on an automated basis. Low-hanging fruit would also be understanding text, pictures, and video, and making recommendations.
What about the more complex use cases of AI?
We have a lot of biology investments that are basically doing things such as analyzing free-floating DNA inside a blood sample. In other words, DNA that's not bound to tissue and is just floating around in your bloodstream. We can take that DNA and use it to predict whether you are going to get a specific type of cancer. Compare that to the global standard today, which is that you need a tissue biopsy. There's a company called Freenome that can do that without any tissue. For cancer, the most important thing you can do is early diagnosis, as survival rates are 90% when you find it early, but 10% when you find it late. So, the single most important thing you can do in cancer therapy right now is not to invent something for the 10% case – it's to catch more cancers early, in the 90% case. In general, AIs are now likely to be as effective as or more effective than the very best human diagnosticians.
AI eats jobs, but fears of an AI takeover are overblown
AI is almost always spoken of in context of job losses. What are your thoughts on this?
Every category of automation eliminates jobs and creates jobs, and this has been true since the very dawn of automation. When mechanized looms were invented, a lot of weavers lost their jobs. Today, less than five percent of the population is involved in farming, compared to 80% at the turn of the 20th century. My take is that automation has displaced people, but it has always created new opportunities, and AI is just one in a long line of automation technologies.
We didn't know we wanted iPhones until Steve Jobs showed them to us. Our capacity to want new things and new services is basically infinite. And if it's infinite, then somebody is going to emerge to service that need and create new jobs around it. I don't know why AI would be any different than any other automation technology. Some of the jobs AI will replace are also not work that anyone wants to do. For example, AI is going to eventually make it possible to automate strawberry harvesting. If you've ever been a strawberry harvester, you know that it's a miserable job. In general, that's good, as long as we can find the strawberry harvester another job, which I'm pretty confident we'll be able to do, because we always want new things and new services.
Premature to worry about super AI
Elon Musk worries about World War 3 when it comes to AI. Machine learning researchers disagree. What is your take on the debate?
To frame that answer, let me describe maturity levels for AI. Everything that I've been talking about is what the research community calls narrow AI, which is very specific. Pinterest built something that can look inside a pin and figure out what's in it and what's similar to it. Those algorithms don't help Lyft figure out the best route to take you from point A to point B. And those Lyft algorithms don't help Freenome figure out if you have cancer. They're very task specific, and that's why they're called narrow: they don't generalize. Even in these narrow fields there are lots of things that people can do that algorithms can't do.
The second sort of AI would be general AI. In other words, you and I can learn new things, like you can learn to ice skate, you can learn to do double-entry book-keeping, and you can learn to do machine learning and a lot more. AIs can’t do that yet and there’s no consensus in the research community about how you would build a machine that has this remarkable property that our brains have, which is the ability to do new things in new domains very quickly. So, we’re not even on a path to really do that.
And then the third stage is what Elon Musk is most worried about, which is super AI and ‘singularity’, where AIs are better than humans at everything. At that point, there’s no looking back, because computers are faster, have more memory, and are more repeatable, therefore making humans irrelevant. That seems very far away. As Andrew Ng says, it’s like worrying about overpopulation on Mars. One day we might have to worry about that, but nobody is on Mars yet, so let’s not worry about that now. Even in narrow AI there’s a long way to go, so worrying about super AI seems a little premature.