[voices_in_ai_byline]

I have no doubt that eventually, as a human race, we're going to have the ability to figure out how to build intelligent systems that are just as intelligent as we are. I think in a number of these things, we have a tendency to think about how we're distinct from other kinds of intelligences on Earth. We do things such as… there was a time period where we wanted to differentiate ourselves from the animals, and we believed that reason, the capability to reason and do things such as mathematics and abstract logic, was exactly what was uniquely human about us.
I always like to start with my Rorschach question, which is: "What is intelligence, and is artificial intelligence artificial?" You're a neuroscientist and a psychologist and a biologist, so how do you think of intelligence?
Episode 84 of Voices in AI features host Byron Reese and David Cox talking about classifications of AI, and how the research has been evolving and growing.

Additionally, there are some examples where you take a captioning system, a system that can take an image and generate a caption. It can produce wonderful captions in scenarios where the images look like those it was trained on, but show it anything even slightly weird, like an airplane that's about to crash, or a family fleeing their home on a flooding shore, and it'll produce things such as "a plane is on the tarmac at an airport" or "a family is standing on a beach." It's as if it was able to do something because it learned correlations between the inputs and the outputs we asked it for, but it missed the point; it didn't have a deep comprehension. And I think that's the crux of what you're getting at, and I agree in part.
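For readers who want to see the shape of the system being described, here is a minimal sketch of an encoder-decoder captioner, assuming PyTorch and torchvision; the architecture, model choice, and sizes are illustrative stand-ins, not the system discussed in the episode.

```python
# Minimal sketch of an image-captioning model: a CNN encodes the image,
# an LSTM decodes a caption. All names and sizes are illustrative.
import torch
import torch.nn as nn
from torchvision import models

class CaptionNet(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = models.resnet18(weights=None)      # image encoder (untrained here)
        cnn.fc = nn.Identity()                   # keep the 512-d feature vector
        self.encoder = cnn
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.init_h = nn.Linear(512, hidden_dim) # image features -> initial state
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images)             # (B, 512)
        h0 = self.init_h(feats).unsqueeze(0)     # (1, B, hidden_dim)
        c0 = torch.zeros_like(h0)
        out, _ = self.lstm(self.embed(captions), (h0, c0))
        return self.head(out)                    # next-token logits per position
```

Because the decoder only learns to maximize the likelihood of captions paired with in-distribution photos, an out-of-distribution image still gets mapped to the nearest familiar caption, which is exactly the failure mode described above.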
Listen to the one-hour episode or read the full transcript at www.VoicesinAI.com
And we're starting to find that the solutions you get are tremendously useful, but they do have a bit of the quality of climbing a tree or climbing a hill. There's a bunch of recent work suggesting… essentially these systems are looking at texture, so a great deal of the solution they learn under supervision is looking at rough texture.
Some individuals are purists and want to say this is AI, but that other thing is just statistics or regression or loops. At the end of the day, we're hoping to create machines that can make decisions the way we do, whether those decisions are more or less complex. It really is about how we model the world and how we take actions that actually drive us toward our goals.

[voices_in_ai_link_back]

There's this idea that if you wanted to get to the moon, one way to get closer would be to climb a mountain.

That's a great question. I think we don't necessarily have to have one definition. I believe people get wrapped up in the words, but at the end of the day, what makes us intelligent, what makes other organisms on this planet intelligent, is the ability to absorb information about the environment, to build models of what's likely to happen next, to forecast, and to take actions that help achieve whatever goal you're trying to achieve. And when you look at it that way, that's a fairly broad definition.

Some of the work we're doing is building systems where we use neural networks to extract structure from these noisy, messy inputs of vision and other modalities, but then actually having symbolic AI systems operate on that structure. Symbolic AI systems have been basically contemporaneous with neural networks. Neural networks, deep learning… everyone knows this is a rebrand of the neural networks from the 1980s that are unexpectedly powerful again. They are powerful now for the first time because we have sufficient data and sufficient compute.
In the middle, and I believe this is where the interesting action is, there's this idea of Broad AI, and I believe that's really where the stakes are today. How do we have systems that can go beyond the narrow things we have, without getting hung up on these notions of what "General Intelligence" might be? So things like having systems that are interpretable, having systems that can work with various sorts of data and integrate knowledge from other sources: that's sort of Broad AI's domain. Broad Intelligence is really what the lab I lead is all about.
At the end of the day, our brains are computers. I think that's a contentious statement, but it's one I believe is well-grounded. The brain is a very sophisticated computer. It happens to be made out of biological materials. But at the end of the day, it's an efficient, tremendously powerful, parallel, nanoscale classical computer. Neurons are like nanotechnology. And to the extent that it is a computer, and to the extent that we can agree about that, Computer Science provides us equivalencies. We can construct a computer that does what it does. We do not need to emulate the hardware; we don't have to copy the brain. But it's kind of a given that we will eventually be able to do whatever the mind does in a computer. Now of course all that's far off, I think. Those are not the stakes today; these aren't the battlefronts we're working on now. However, I think the sky's the limit in terms of where AI could go.

No. When we think about Broad AI, we're thinking a little bit of "don't hit the reset button; don't throw away things that work." Deep learning is a set of tools that's exceptionally powerful, and we'd be kind of foolish to throw them away. But when we consider Broad AI, what we're really getting at is how we can start to make contact with that deep structure in the world… like commonsense.

Right. Exactly.
Would you say it isn't a case of a narrow AI getting a little better, then a little better, then a little better, and then, ta-da! That this model of "Hey, let's take a great deal of data about the past and study it very carefully to learn to do one thing" is different from whatever General Intelligence is going to be?
You said Narrow and General AI, and the classification you're placing in between them is Broad. I have an opinion, and I'm curious what you believe. At least with respect to General and Narrow, my view is that they aren't on a continuum; they're different technologies. Would you agree with that or not?

It's funny, the term "AI" too. I'm a recovering academic, as you said. I was at Harvard for many years, and I think as a field we were really uncomfortable with the term "AI." So we desperately wanted to call it anything else. In 2017 and earlier we wanted to call it "machine learning," or we wanted to call it "deep learning" to be more specific. However, in 2018, for whatever reason, we all just gave up and embraced the term "AI." In some ways I believe that's healthy. But when I joined IBM, some framing the company had done actually really pleasantly surprised me.

So again, this is in the spirit of "don't throw the neural networks away." They're amazing at extracting certain sorts of statistical structure from the world: convolutional neural networks do a wonderful job of extracting structure from visual data, and recurrent neural networks and LSTMs do a wonderful job of extracting structure from natural language. But we want symbolic structure as a first-class citizen in a system that combines the two.
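A minimal sketch of that division of labor, assuming PyTorch (toy sizes, not the lab's models): a convolutional network distills pixels into a feature vector, an LSTM does the same for a token sequence, and a downstream system can consume the two as first-class inputs.

```python
# Illustrative sketch: a CNN extracting structure from pixels and an LSTM
# extracting structure from tokens, each yielding a fixed-size vector that
# a downstream (e.g. symbolic) component could combine. Sizes are toy.
import torch
import torch.nn as nn

cnn = nn.Sequential(                       # vision: pixels -> 64-d vector
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 64),
)
embed = nn.Embedding(1000, 32)             # language: token ids -> embeddings
lstm = nn.LSTM(32, 64, batch_first=True)

image = torch.randn(1, 3, 32, 32)          # stand-in image
tokens = torch.randint(0, 1000, (1, 12))   # stand-in sentence

visual_structure = cnn(image)              # (1, 64)
_, (h, _) = lstm(embed(tokens))
linguistic_structure = h[-1]               # (1, 64), final hidden state
combined = torch.cat([visual_structure, linguistic_structure], dim=1)
print(combined.shape)                      # torch.Size([1, 128])
```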
Listen to the one-hour episode or read the full transcript at www.VoicesinAI.com
So basically, what we have today, things like deep learning and machine learning, are tremendously powerful technologies that are going to disrupt a lot of things. We call those Narrow AI, and I believe that narrow framing actually calls attention to the ways in which, even if it's powerful, it's fundamentally limited. And then on the other end of the spectrum we have General AI. This is a term that's been around for quite a while, this notion of systems that can decide what they want to do for themselves, that are broadly autonomous, and that's fine. These are really intriguing discussions to have, although we're not there yet as a field.

And today's AI, if we're being plain about things, is deep learning. What's really been successful in deep learning is supervised learning. We reduce every component of seeing to classifying things: you classify lots and lots of images, you have lots of training data, and you build a model. And that's all the model has ever seen. It has to learn from those pictures and from that task.
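As a concrete illustration of that recipe, here is a hedged sketch in PyTorch; the "dataset" is random stand-in tensors rather than real labeled images, and the architecture is a deliberately tiny placeholder.

```python
# Minimal sketch of the supervised-learning recipe: labeled images in,
# a classifier out. Data and model here are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(                    # toy image classifier
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
    nn.Linear(256, 10),                   # 10 object classes
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a real labeled dataset (e.g. CIFAR-10-sized tensors).
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 10, (64,))

for step in range(100):                   # the entirety of what it "sees"
    logits = model(images)
    loss = loss_fn(logits, labels)        # penalize wrong class predictions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Everything the model knows comes from that one loop over that one task, which is why the narrow framing above is apt.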

And then computers came along and could actually do some of those things better than we can, things like arithmetic and solving logic or math problems. So we moved towards believing that perhaps it's emotion; maybe emotion is what makes us uniquely human. It was a sort of narcissism, I believe, in our own view of ourselves, one that is justifiable and understandable: how are we unique in this world?

IBM does this thing called the GTO, or Global Technology Outlook, which happens every year, and the company tries to figure out (Research plays a very major part in this) "What does the future look like?" And they came up with a framing for AI that I really like. They did something simple: they put some adjectives in front of "AI," and I think it clarifies the debate a whole lot.

About this Episode

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I'm Byron Reese. I'm excited about today's show. Today we have David Cox. He's the Director of the MIT-IBM Watson AI Lab, which is part of IBM Research. Before that he spent 11 years teaching at Harvard, interestingly in the Life Sciences. He holds an AB degree from Harvard in Biology and Psychology, and he also holds a PhD in Neuroscience. Welcome to the show, David!
We've got all kinds of commonsense. I know things about the world, straightforward things that we take for granted. Like, I know my desk is likely made from wood, and I know that wood is a solid, and solids can't pass through other solids. And I know it's probably flat, and if I put my hand out I would be able to orient it in a position that would be appropriate to hover above it…
I believe things like that, and in many ways a great deal of the symbolic ideas, sorts of operations, planning… they're also very powerful methods, but they haven't been able to shine yet, partially because they've been waiting for something, just as neural networks were waiting for compute and data to come along. I believe in many ways a number of these symbolic techniques have been waiting for neural networks to come along, because neural networks can kind of bridge that [gap] from the messiness of these incoming signals into this sort of symbolic regime where we can begin to really work. One of the things we're doing is building these systems that can bridge across that gap.
And you'll get nearer, but you're not really on the right path. Maybe you'd be better off on top of a building, or with a tiny rocket that perhaps doesn't go as high as the tree or as high as the mountain, but that will eventually get you where you need to go. I really do think there's a flavor of that with today's AI.
David Cox: Thanks. It is a great pleasure to be here.

But I believe in many ways we will. Even if you look at reinforcement learning, those systems have a notion of reward. I don't even think it's such a reach to believe that in some sci-fi future we'll have machines that have perceptions of pleasure and hopes and ambitions and things like that.
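For the curious, here is what that "notion of reward" looks like in its simplest possible form: a toy tabular Q-learning loop, where a single scalar reward at the goal drives all learning. The corridor environment and constants are invented for illustration, not anything from the episode.

```python
# Tabular Q-learning on a 5-state corridor: the agent learns to walk
# right purely because reward arrives at the last state.
import random

N_STATES, ACTIONS = 5, (-1, +1)            # move left or right; goal = state 4
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def greedy(s):
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0   # the only feedback signal
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy marches right toward the reward.
print([greedy(s) for s in range(N_STATES - 1)])       # expect [1, 1, 1, 1]
```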


There are all these affordances and all this simple commonsense stuff that you don't get when you do brute force learning. When we think about Broad AI, what we're thinking about is "How can we infuse that knowledge, that comprehension and that commonsense?" And one area we're excited about, and are working on here at the MIT-IBM Lab, is this idea of neuro-symbolic hybrids.
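As a hedged sketch of the neuro-symbolic pattern being described (every name, class, and fact below is an illustrative placeholder, not the MIT-IBM Lab's system): a neural network maps messy pixels to a discrete symbol, and a small hand-written knowledge base supplies the commonsense, echoing the desk example above.

```python
# Neuro-symbolic sketch: neural perception produces a symbol; a symbolic
# knowledge base reasons over it. Everything here is illustrative.
import torch
import torch.nn as nn

SYMBOLS = ["desk", "chair", "lamp"]

perception = nn.Sequential(               # neural side: pixels -> symbol scores
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 64), nn.ReLU(),
    nn.Linear(64, len(SYMBOLS)),
)

# Symbolic side: explicit commonsense facts, queried with plain logic.
FACTS = {
    ("desk", "made_of"): "wood",
    ("chair", "made_of"): "wood",
    ("lamp", "made_of"): "metal",
    ("wood", "is_a"): "solid",
    ("metal", "is_a"): "solid",
}

def query(subject, relation):
    """Look up a fact; returns None when the knowledge base is silent."""
    return FACTS.get((subject, relation))

image = torch.randn(1, 3, 32, 32)          # stand-in for a real photo
symbol = SYMBOLS[perception(image).argmax(dim=1).item()]

material = query(symbol, "made_of")
if material and query(material, "is_a") == "solid":
    print(f"A {symbol} is made of {material}, so my hand won't pass through it.")
```

The point of the hybrid is the hand-off: the network handles the messy pixel statistics it is good at, while the facts about solidity, which no reasonable number of labeled photos would teach it, live in an explicit, inspectable store.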
You're essentially taking the techniques we have and doing them smarter and bigger and better and more expansively, but that's not a version of General AI. Would you agree with that, or is that not the case?
There's a lot in there, and I agree with you in part. I'm not really that interested in the low end, in what clears the lowest bar for AI. What makes the question interesting to me is the mechanism: does intelligence admit a mechanistic, reductionist view of the world? In other words, is human intelligence something you think we're going to be able to duplicate, at least in terms of its function? Are we going to be able to build machines that are as versatile as a human in intelligence, that are creative and would have emotions and all the rest, or is that an open question?