7 Quotes That Will Shape the Way You Think About Artificial Intelligence

Seminal Thinkers in the Field Interviewed on New Podcast Series, “Voices in AI”.

argodesign
10 min read · Oct 4, 2017

Artificial Intelligence is transforming the future of technology. And yet the subject is so vast and complex that scholars struggle to agree on universal definitions of critical concepts like ‘consciousness’ and ‘intelligence’ — even what ‘AI’ really means. Voices in AI, a new podcast series that brings together the world’s foremost experts in the field, explores some of its most compelling questions and surfaces both the practical and the philosophical implications of this burgeoning form of computing. Here, we review highlights from volume one, with seven interview excerpts from top minds in AI.

YOSHUA BENGIO, author and professor at the University of Montreal

“I do really believe that creativity is computational…It is something we understand the principles behind. So, it’s only a matter of having…neural nets or models that are smarter. That understand the world better. So, I don’t think that creativity is something — I don’t think that any of the human faculties is something — inherently inaccessible to computers. I would say that some aspects of humanity are less accessible and creativity of the kind that we appreciate is probably one that is going to be something that’s going to take more time to reach. But maybe even more difficult for computers, but also quite important, will be to understand not just human emotions, but also something a little bit more abstract, which is our sense of what’s right and what’s wrong. And this is actually an important question because when we’re going to put these computers in the world, in products, and they’re going to take decisions, well for some very simple things we know how to define the task, but sometimes the computer is going to be having to make a compromise between doing the task that it wants to do and maybe doing bad things in the world. And so, it needs to know what is bad. What is morally wrong? What is socially acceptable? And, I think we’ll manage to train computers to understand that, but it’s going to take a while as well.”

Full podcast episode with Voices in AI here.

OREN ETZIONI, CEO of the Allen Institute for Artificial Intelligence

“I’m not a big fan of the doomsday scenarios about AI. I tell people we should not confuse science with science fiction. But another reason why we shouldn’t concern ourselves with Skynet and doomsday scenarios is because we have a lot more realistic and pressing problems to worry about. And that, for example, is AI’s impact on jobs. That’s a very real concern. We’ll see it in the transportation sector, I predict, particularly soon, where truck drivers and Uber drivers and so on are going to be gradually squeezed out of the market, and that’s a very significant number of workers. And it’s a challenge, of course, to help these people to retrain them, to help them find other jobs in an increasingly digital economy…The reason that we have, you know, phones and cars and washing machines and all these things that make our lives better and that are broadly shared through society and modern medicine, and so on, is because of technological advances. So I don’t think of these technological advances, including AI advances, as either a) negative; or b) avoidable…I think that it’s very, very difficult, if not impossible to stop broad-based technology change. Narrow technologies that are particularly, you know, terrible, like landmines or biological weapons, we’ve been able to stop. But I think AI isn’t stoppable because it’s much broader, and it’s not something that it should be stopped, it’s not like that…We survived those things and we emerged thriving, but the disruption over significant periods of time and for millions of people was very, very difficult. So right as we went from a society that’s…ninety-something percent agricultural to one where there’s only two percent of workers in agriculture — people suffered and people were unemployed. And so, I do think that we need to have the programs in place to help people with these transitions.”

Full podcast episode with Voices in AI here.

MARK ROLSTON, founder and chief creative at argodesign

“Today, we’re experiencing digital systems that are, in increasingly sophisticated ways, thinking for us. They’re helping us get home. They’re helping advise us on who to call right now and what to do right now, and where that thing is I forgot where it was…I have this sort of external mind now. Just like, historically, we had this idea that the voice in our mind was not really our own (the bicameral mind). These digital systems that are extensions of us — they have deep properties that we helped to imbue them with about us — we think of them as very external forces right now. They are Facebook. They are Siri. Yet, increasingly, we depend on them in the same way that we depend on our own internal conscience or our own internal voices. Eventually, I think, much like we came to have a unified mind, the digital systems that we depend on — largely, we’re talking about these intelligent systems, these artificial intelligence assistants that are emerging around us for everything — will become one with us. And I don’t mean that in some sci-fi way. I mean, in the sense that when we think about our identity — ‘Who am I? How smart am I? What am I best at? What do I know the most of? Am I funny? Am I clever? Am I witty?’ — anything like that will be inseparably connected to those digital systems, that we tie to us, that we use on a frequent basis. We’ll be unable to judge ourselves in any sort of immutable way of flesh and blood; it will be as this newly-joined cyber-creature. To me, that again spells out more and more that the idea of our own cognition — our own idea of what does it mean to be intelligent as a human, sort of natural intelligence — isn’t that magically different. It is entwined with not only the digital systems we subscribe to, but these digital systems are drawing on the same underlying basis of decision-making and context-forming…they’re just one-quintillionth the level of sophistication.”

Full podcast episode with Voices in AI here.

JEFF DEAN, Google Senior Fellow and Google Brain lead

“One of the things that I think is really important today in the field of machine learning research, that we’ll need to overcome, is…right now, when we want to build a machine learning system for a particular task we tend to have a human machine learning expert involved in that. So, we have some data, we have some computation capability, and then we have a human machine learning expert sit down and decide: Okay, we want to solve this problem, this is the way we’re going to go about it roughly. And then we have the system that can learn from observations that are provided to it, how to accomplish that task. That’s sort of what generally works, and that’s driving a huge number of really interesting things in the world today. And you know this is why computer vision has made such great strides in the last five years. This is why speech recognition works much better. This is why machine translation now works much, much better than it did a year or two ago. So that’s hugely important. But the problem with that is you’re building these narrowly defined systems that can do one thing and do it extremely well, or do a handful of things. And what we really want is a system that can do a hundred thousand things, and then when the hundred thousand-and-first thing comes along that it’s never seen before, we want it to learn from its experience to be able to apply the experience it’s gotten in solving the first hundred thousand things to be able to quickly learn how to do thing hundred thousand-and-one.”

Full podcast episode with Voices in AI here.
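
To make Dean’s point concrete, here is a minimal sketch of the workflow he describes — a purely illustrative single-task pipeline (our choice of scikit-learn and its digits dataset, not anything from the episode or from Google), in which every design decision is made up front by a human expert and the resulting model excels at exactly one narrowly defined thing:

```python
# A minimal sketch of the workflow Dean describes. Every choice below --
# dataset, model family, hyperparameters -- is made by a human expert
# for this one task. (Illustrative only; not code from Google Brain.)
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)        # observations for ONE task
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)  # expert-chosen model and settings
model.fit(X_train, y_train)                # the system learns from observations
print(f"digit accuracy: {model.score(X_test, y_test):.3f}")

# The result does digit recognition extremely well -- and nothing else.
# Dean's "thing hundred thousand-and-one" would start again from zero.
```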

DAPHNE KOLLER, co-founder of Coursera and professor at Stanford University

“That was the day of logical AI, and I think people thought that one could reason about the world using the rules of logic, where you have a whole bunch of facts that you know — dogs are mammals; Fido is a dog, therefore Fido is a mammal — and that all you would need is to write down those facts, and the laws of logic would then take care of the rest. I think we now understand that that is just not the case, and that there is a lot of complexity both on the fact side, and then how you synthesize those facts to create broader conclusions, and how do you deal with the noise, and so on and so forth. So I don’t think anyone thinks that it’s as simple as that. As to whether there is a single, general architecture that you can embed all of intelligence in, I think some of the people who believe that deep neural networks are the solution to the future of AI would advocate that point of view. I’m agnostic about that. I personally think that that’s probably not going to be quite there, and you’re probably going to need at least one or two other big ideas, and then a heck of a lot of learning to fine-tune parts of the model to very different use models — in the same way that our visual system is quite different from our common sense reasoning system…I think neural nets are very powerful technology, and they certainly help address, to a certain extent, a very large bottleneck, which is how do you construct a meaningful set of features in domains where it’s really hard for people to extract those, and solve problems really well. I think their development, especially over the last few years, when combined with large data, and the power of really high-end computing, has been transformative to the field. Do I think they are the universal architecture? Not as of now.”

Full podcast episode with Voices in AI here.
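
Koller’s Fido syllogism is exactly the kind of thing logical AI handled well. The toy forward-chaining reasoner below (our illustrative sketch, with made-up predicate names) derives the conclusion in a few lines — her point is that real-world facts are noisy and rarely compose this cleanly:

```python
# A toy version of the logical AI Koller describes: known facts plus a
# rule, and inference takes care of the rest. (Illustrative sketch only.)

facts = {("dog", "Fido")}     # Fido is a dog
rules = [("dog", "mammal")]   # dogs are mammals

def forward_chain(facts, rules):
    """Apply every rule to every fact until nothing new can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for pre, post in rules:
            for pred, subject in list(derived):
                if pred == pre and (post, subject) not in derived:
                    derived.add((post, subject))
                    changed = True
    return derived

print(forward_chain(facts, rules))
# {('dog', 'Fido'), ('mammal', 'Fido')} -- therefore Fido is a mammal
```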

NICK BOSTROM, best-selling author and philosopher at Oxford University

“By the time we figure out how to make machines truly smart, we will need to have figured out ways to align them with human goals and intentions so that we can get them to do what we want. So right now you can define an objective function. In many cases it’s quite easy. If you want to train some agent to play chess, you can define what good performance is. You get a 1 if you win a game and a 0 if you lose a game, and 1/2 a point perhaps if you make a draw. So that’s an objective we can define. But in the real world, all things considered, we humans care about things like happiness and justice and beauty and pleasure. None of those things are very easy to sort of sit down and type out a definition in C++ or Python. So you’d need to figure out a way to get potentially superintelligent agents to nevertheless serve as an extension of the human will, so that they would realize what your intentions are, and then be able to execute that faithfully. That’s a big technical research challenge that there are now groups bringing up and pursuing. And assuming that we can solve that technical control problem, then we get the luxury of confronting these wider policy issues. Like who should decide what this AI is used for, what social purposes it should be used for, and how we want this future world with superintelligence to look.”

Full podcast episode with Voices in AI here.
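
Bostrom’s contrast is worth spelling out in the Python he mentions. The chess objective really is a one-liner; the sketch below (ours, purely illustrative) sets it next to the function nobody knows how to write:

```python
# The chess objective Bostrom describes is trivial to define:
def chess_reward(outcome: str) -> float:
    """1 for a win, 0 for a loss, 1/2 for a draw."""
    return {"win": 1.0, "loss": 0.0, "draw": 0.5}[outcome]

# The values he lists are not. Any body for this function would itself
# be a contested philosophical claim -- that is the alignment problem.
def happiness_reward(world_state) -> float:
    raise NotImplementedError("no agreed-upon definition to encode")
```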

JARED FICKLIN, partner and chief creative technologist at argodesign

“It’s personification that often is the dangerous thing. Think of people who dance with poisonous snakes. Sometimes it’s done as a dare, but sometimes it’s done because there’s a personification put on the animal that gives it greater importance than what it actually is, and that can be quite dangerous. I think we risk that [with AI], too, just putting too much personification, human tendencies, on the technology. For instance, there is actually a group of people who are advocating rights for industrial robots today, as if they are human, when they are not. They are very much just industrial machines. That kind of psyche is what I think some people are trying to inoculate now, because it walks us down this path where you’re thinking you can’t turn that thing off, because it’s given this personification of sentience before it has actually achieved it. It’s been given this notion of rights before it actually has them. And the judgment of, even if it’s dangerous and we should hit the kill switch, there are going to be people reacting against that, saying, ‘You can’t kill this thing off’ — even though it is quite dangerous to the species. That, to me, is a very interesting thing because a lot of people are looking at it as if, if it becomes intelligent, it will be a human intelligence. I think that’s what a lot of the big thinkers think about, too. They think this thing is not going to be human intelligence, at which point you have to make a species-level judgment on its rights, and its ability to be sentient and put out there.”

Full podcast episode with Voices in AI here.

Voices in AI Volume I is brought to you by Gigaom and argodesign.

