No.

I don’t believe that’s accurate. And the things you’re mentioning aren’t really mutually exclusive either. In language processing, shared sense databases are often used, or you’re actually helping do the data extraction to fill out those databases. And transfer learning is a general technique that, in its current deep learning incarnations, is powerful.

So I don’t actually know where I’m heading with this other than: do you really feel as though you could say your field of practice is among the more mature ones? Like, “Hello, we’re doing our bit; the rest of you common-sense people over there, and you models-of-the-world people over there, and you transfer learning people, y’all are falling behind, but we computational linguistics people have it all together”?
So… everything you’re mentioning is relevant to this task of saying something and having the device on your desktop understand what you’re talking about. And that entire process isn’t merely comprehending the words; it’s taking those words, mapping them to some kind of user intent, and then being able to act on that intent. That whole pipeline involves a ton of different models and requires being able to make queries about the world and extract information based on… usually it’s going to be the content words of the phrase: nouns, verbs, the things conveying the main ideas of your utterance. You use those to find information relevant to it.
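To make the idea concrete, here is a minimal sketch (my illustration, not Mitchell’s actual system) of pulling out content words: filtering an utterance against a small, hard-coded stopword list so the remaining nouns and verbs can drive an information lookup. A real pipeline would use a part-of-speech tagger rather than this illustrative list.

```python
# Illustrative only: a tiny stopword list standing in for real
# function-word detection (a production system would POS-tag instead).
STOPWORDS = {
    "a", "an", "the", "is", "are", "was", "what", "which",
    "to", "of", "on", "in", "and", "or", "do", "does",
}

def content_words(utterance: str) -> list[str]:
    """Return the words most likely to carry the query's meaning."""
    tokens = utterance.lower().replace("?", "").replace(",", "").split()
    return [t for t in tokens if t not in STOPWORDS]

print(content_words("What is bigger, the sun or a nickel?"))
# -> ['bigger', 'sun', 'nickel']
```

The surviving words are exactly the ones you would hand to a retrieval or question-answering component to find relevant facts.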

About this Episode

Deep learning models are used in natural language processing, in image processing, and in a ton of other areas.

I wanted to figure out: was there a way I could look at things in the world and then generate a noun phrase? So I was kind of playing with this idea of “How could I generate that?” And that was before I knew about natural language processing, before this new wave of AI interest. I was just kind of playing around with trying to do something human-like, based on my understanding of how language worked, and then mocking up some basic examples of how that could work if you had some knowledge about the sorts of things you’re trying to talk about. I found myself having to code things up to get that to work.


And once I started doing that, I realized that I was doing what’s basically called natural language generation: looking at things and generating phrases based on some input data, something like that. And once I got into the natural language generation world, it was a slippery slope into machine learning and then what we’re calling artificial intelligence, because those end up being the techniques you use in order to process language.
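In its simplest form, natural language generation of the kind described here can be template-based: structured input data is realized as a noun phrase or sentence. The following sketch is my own illustration under that assumption, not code from the interview.

```python
# Minimal template-based NLG sketch: render structured facts as a sentence.
def describe(record: dict) -> str:
    """Realize a simple noun phrase plus location from structured input."""
    noun_phrase = f"{record['size']} {record['color']} {record['noun']}"
    article = "an" if noun_phrase[0] in "aeiou" else "a"
    return f"There is {article} {noun_phrase} {record['location']}."

print(describe({"size": "small", "color": "red", "noun": "ball",
                "location": "on the table"}))
# -> There is a small red ball on the table.
```

Statistical and neural generators replace the fixed template with learned mappings, but the input-data-to-phrase framing is the same.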

Well, I also find that in some ways our ability to process language is ahead of our ability, in most cases, to do something with it. I can’t state their names out loud, since I have two of these popular devices on my desk and they’ll answer me if I say them, but they understand what I’m saying. Yet if I ask something like “What’s bigger: the sun or a nickel?” they never get it. And they usually understand the sentence.

Right. So that’s speech processing. And that has to do with a bunch of things, including that how well the speech stream can be analyzed will differ based on the kind of device you’re using. A lot of times the frequencies are cut off, so words, or sounds, that we hear quite easily [spoken] face to face get sort of muddled. On things like phones especially, a lot of those higher frequencies that help with those distinctions end up being cut. And there are general training issues: depending on what the data represents and who you’ve trained on, you’re going to get different kinds of strengths and weaknesses.

I’m always intrigued by how folks make their way into the AI world, because a lot of times what they studied at university [is so diverse]. I’ve seen neuroscientists, I’ve seen physicists, I’ve seen all kinds of backgrounds. What was the path that got you to artificial intelligence and to computational linguistics?
Right.
Listen to the one-hour episode or read the full transcript at www.VoicesinAI.com


Why is it so awful?



So I followed a path very similar, I think, to some other people who’ve had linguistics training and then gone into natural language processing, which is sort of [the] applied area of AI focusing specifically on processing and understanding text, as well as generating it. When I was an undergrad, I was fascinated by noun phrases: phrases that refer to people and objects in the world, things like that.
And it’s only got 36 choices [26 letters and 10 digits].
Right. So the Turing test, as it was construed, has this notion that the person judging can’t tell whether the responses are machine-generated or human-generated. And there are plenty of ways to do that which aren’t exactly what we mean by human-level performance. For instance, you could trivially pass the Turing test by pretending to be a system that doesn’t understand English well, right? You could say, “Oh, there’s a person behind this; they’re just learning English for the first time, so they might get some things mixed up.”
Right.
Thank you for having me.
So the Turing test… if I can’t tell whether I’m talking to a person or a machine, you’ve got to say the machine is doing a pretty good job; it’s thinking, according to Turing. Do you think passing the Turing test would actually be a watershed event? Or do you believe it’s not the kind of thing you care about one way or another, and it’s more like hype and marketing?

So my question is: I always hear these claims that say “computers have ninety-something point whatever percent accuracy in transcription,” and I fly a lot. My frequent flyer number of choice has an H, an A, and an 8 in it.

My guest today is Margaret Mitchell. She’s a senior researcher at Google. She studied linguistics at Reed College and computational linguistics at the University of Washington. Welcome to the show!

And I’d say it gets it right.