Rev Lebaredian

Rev Lebaredian, Vice President of Simulation Technology at Bay Area-based NVIDIA, speaks about innovations in artificial intelligence, gaming, and robotics, as well as how technology is impacting our humanity.



Transcript:


Ojig Yeretsian:This is Method to the Madness, a biweekly public affairs show on KALX Berkeley, celebrating Bay Area innovators. I'm your host, Ojig Yeretsian. Today I'm speaking with Rev Lebaredian, vice president of simulation technology at NVIDIA, where he leads gaming technology and simulation efforts. Welcome to the show, Rev. What is VR?


Rev Lebaredian:Well, VR stands for virtual reality, obviously. What most people imagine when we say VR are these clunky headsets that you put on your face, or some little receptacle you place your phone into before putting it on your face. VR is actually something that we've been experiencing from the very beginning of mankind. All of our perception actually happens in our brains. You're not seeing with your eyes; you're seeing the world around you interpreted through what your brain is actually doing. When we sit around and talk to each other like we are right now, [inaudible] elephant, and you just got an image of an elephant in your brain. There's not one around here. You conjure up this image, and that's me incepting this image into your brain, a virtual reality that we're constructing. Here we are talking, having this conversation, and we're constructing a reality amongst ourselves.


These new versions of virtual reality that we're starting to see are just a more direct way to create an immersive virtual reality experience. It's not actually the end yet. We're not totally at the end of this thing; it's just one of the steps along the way. Humanity has figured out ways of creating this virtual reality: first just communicating, telling stories to each other verbally. Eventually we had books where we could write them down. You could do recordings like the one we're making right now, movies, video games, but the end game is going to be where we can start communicating even without words, potentially. I highly recommend you look up Ken Perlin from NYU. He's one of the greats of computer graphics, and he describes what virtual reality means to him. I completely agree with what he's saying. My piece in this is the construction of virtual realities and virtual worlds through simulation; that's fundamentally what we do at NVIDIA. Our core is as a computer graphics company: we power most of the computer graphics in the world, at least the serious stuff.


Constructing these virtual worlds so we can inject them into these virtual realities is our currency.


Ojig Yeretsian:What is AR?


Rev Lebaredian:They're actually related. Virtual reality is a new reality that you create that you're completely immersed in, but it's on its own. AR stands for augmented reality. Another term is mixed reality, MR; some people use that term instead. Currently we're in a reality of our own right here. We're sitting in this room talking to each other, and I'm perceiving you sitting there. Mixed realities or augmented realities are ones where I can blend other realities into this world more directly. The current manifestations of this, the beginnings of AR, we're seeing through your phones. I mean, every iPhone and Android phone nowadays has that crude thing we call AR, where you can point your phone at something in your environment and it creates a digital representation of some reality mixed into it. The first app to make this popular was Pokemon Go. It was very cool but still extremely crude. A few years from now it's going to be far more compelling and far more immersive.


Ojig Yeretsian:AI versus deep AI.


Rev Lebaredian:These terms are very contentious. What is AI? What is intelligence? We still haven't really defined that. Generally speaking, when we colloquially speak about artificial intelligence today, we're talking about algorithms: computers doing things that we used to think only humans could do. We've been going through a series of these things throughout computing history. One of the first things we thought only humans would be able to do was play chess. In the 90s, Garry Kasparov, the world champion at the time, was beaten by Deep Blue. It reshaped what we thought computers could do and what the domain of humans is. Interestingly, it didn't kill chess, which is one of the things that people assumed would happen once a computer won. Turns out, we don't really care what computers can do. We mostly care what humans do. So, I'm sure we'll make a robot one day that can play basketball better than any NBA player, but that won't kill basketball.


Ojig Yeretsian:It won't replace it, no.


Rev Lebaredian:We have people that run really fast and we really care about how fast they can run, and we go measure that at the Olympics, but cars, or even horses, can run faster and it's just not particularly interesting. What we've assumed all of these years is that there are things that only humans can do, that there's something special. So, we've defined artificial intelligence as the things that computers can't do and humans can. We're inching along over here, occasionally making big steps. We have computers do things that we thought would be impossible. The big one in recent history was around 2012 in Geoff Hinton's group at the University of Toronto. A few grad students took some of our processors, our GPUs that were used for gaming, and used a machine learning algorithm, a deep learning algorithm, to train a new algorithm to do computer vision, to do classification of images. There's a longstanding contest called ImageNet where all the computer vision experts in the world would have their algorithms compete with each other to see who could get the highest-accuracy classification.

You look at an image and you say, "This is a dog. This is a blue bicycle." Traditionally, it's an extremely hard problem. It's been there since the beginning of computer science. We wanted to solve this problem. At first we thought that it would actually be pretty simple, and then we realized it's extremely hard. I mean, I've been coding since I was a little kid. I never believed I would see the day when a computer would be able to tell the difference between a cat and a dog properly. This magic moment happened when these grad students took their gaming processors and applied an older algorithm, but modified, using the computing available to them. The extreme performance that they could get was a supercomputer inside their PC, afforded to them by the fact that there's a large market that wants to play computer games. They took that and they created a new kind of algorithm where, instead of writing an algorithm directly, they trained it. They fed data into it, which was only available because the internet had existed long enough for us to have these images to begin with.


They shattered all the previous records in terms of accuracy. A few years later these algorithms started to become superhuman, and by superhuman I mean that humans, when they look at these images, are sometimes not accurate. They don't know exactly what kind of dog is in the image, or maybe sometimes they think it's a dog but it's really a hyena in the dark. Humans make mistakes, but now the algorithms are superhuman. Before that moment we believed that only humans could do that kind of classification, but that changed. That changed overnight. Now computers are actually better than us at doing that. What does that mean? Is that intelligence? It's hard to say, but the trend, if you look at it, is that we keep figuring out new ways to make computers do things that we didn't think were possible. It's happening so fast. If you extrapolate, you can imagine that maybe at some point we will have machines that are superhuman in a lot of the things that we consider the domain of humans: emotions, humor, things that we call human. Or maybe not. Or maybe there'll be some other thing that we don't quite understand.
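
To make the "train instead of write" idea concrete, here is a minimal sketch of GPU-trained image classification in PyTorch. This is not the 2012 ImageNet code; the network, dataset path, and hyperparameters are illustrative stand-ins.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Instead of hand-writing classification rules, define a network and let
# training "write" them into the weights. Class count is a placeholder.
model = models.resnet18(num_classes=10).to(device)

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Any folder-per-class image dataset works here; the path is hypothetical.
data = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(data, batch_size=64, shuffle=True)

loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for epoch in range(5):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        opt.zero_grad()
        loss = loss_fn(model(images), labels)  # how wrong are we?
        loss.backward()                        # gradients via backprop
        opt.step()                             # nudge the weights
```

The point Rev makes is in the last four lines: nobody writes the classification rules; gradient descent writes them into the weights, using the data as the specification.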


Ojig Yeretsian:What are you working on these days?


Rev Lebaredian:I've been here for almost two decades. I really found my calling when I was around 10 or 11 years old. I saw this image in an [inaudible] magazine of two spheres, these balls, floating above a checkerboard floor. They looked so strange. I'd never seen anything quite like it. I couldn't make out whether it was drawn or whether it was some kind of weird photo of something. I read a little bit more and realized that it was an algorithm that produced that image, that it wasn't actually drawn by someone, nor was it real, a photograph of something. I was hooked. This image was created by Turner Whitted, who invented ray tracing back in 1980. He published [inaudible] on this. Luckily, I got to work with Turner years later. He was with us at NVIDIA, doing some amazing work, until he retired recently. I got to tell him that the reason I was there at NVIDIA working with him was because of that image.


What really excited me was that I could finally draw without having to know how to draw. I could use the tools that I'm good at, which was programming a computer to produce these images.
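
That checkerboard-and-spheres image came from an algorithm simple enough to sketch. Below is a minimal ray tracer in Python, far simpler than Whitted's (one sphere, one light, no reflection or refraction rays); every scene value is made up for illustration. It writes a small grayscale image of a lit sphere.

```python
import math

W, H = 320, 240
CENTER, RADIUS = (0.0, 0.0, -3.0), 1.0
LIGHT = (0.577, 0.577, 0.577)  # unit vector pointing toward the light

def trace(d):
    """Intersect a camera ray (direction d, origin 0) with the sphere;
    return a grayscale shade, or the background shade on a miss."""
    oc = tuple(-c for c in CENTER)                 # origin minus center
    b = 2.0 * sum(o * di for o, di in zip(oc, d))
    c = sum(o * o for o in oc) - RADIUS * RADIUS
    disc = b * b - 4.0 * c                         # a == 1: d is unit length
    if disc < 0:
        return 40                                  # ray misses: background
    t = (-b - math.sqrt(disc)) / 2.0               # nearest hit distance
    p = tuple(t * di for di in d)                  # hit point
    n = tuple((pi - ci) / RADIUS for pi, ci in zip(p, CENTER))
    diffuse = max(0.0, sum(ni * li for ni, li in zip(n, LIGHT)))
    return int(30 + 225 * diffuse)                 # simple diffuse shading

with open("sphere.pgm", "w") as f:                 # plain-text image format
    f.write(f"P2 {W} {H} 255\n")
    for y in range(H):
        for x in range(W):
            dx = (2 * x / W - 1) * (W / H)         # pinhole camera ray
            dy = 1 - 2 * y / H
            norm = math.sqrt(dx * dx + dy * dy + 1)
            f.write(f"{trace((dx / norm, dy / norm, -1 / norm))} ")
        f.write("\n")
```

Whitted's actual contribution was recursion: at each hit, spawn new reflection and refraction rays and sum their contributions, which is what made those spheres mirror the checkerboard.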


Ojig Yeretsian:If you're just tuning in, you're listening to Method to the Madness, a biweekly public affairs show on KALX Berkeley, celebrating Bay Area innovators. Today's guest is Rev Lebaredian, vice president of simulation technology at NVIDIA. He's speaking about gaming technology, robotics, and artificial intelligence.


Rev Lebaredian:So, what is computer graphics, what is a digital image that's been constructed? Basically, computers aren't really drawing, at least not in the traditional sense. What we have computers do is simulation. We have some understanding of how light works, the physics of light, and the images that you see are the products of this simulation that's happening around us in the real world. We're trying to approximate that. Light travels through space. It interacts with matter that's present all around us. It reflects, it absorbs, it transmits, it refracts, it diffracts. There are all of these things that happen, and so what we do with computer graphics is try to get as close as possible to what reality is and simulate that. So, those images that we're producing for a video game, or for the Avengers movie that many people probably just went and saw, are fundamentally a simulation of the physics of light.


When NVIDIA started, before I joined, our CEO Jensen Huang, who's probably the smartest person I've ever met, realized how important computer graphics, the simulation of light, is, but also realized that it's important to find a large market that could support the development, the amount of R&D, that goes into creating something like this. Before then, most of the companies doing really advanced graphics were in fairly niche areas like making movies, or professional CAD design and things like that. What we did was take this to the masses through video games. People love playing video games. What we're creating in a video game is a simulation of some world, and in this world you have to do the simulation of light. That's the graphics that we produce, and you have to do it really fast because it has to be interactive. We do it in a 60th of a second, instead of the hours it takes to produce one of the frames in the Avengers movie.


We have to simulate physics and the interaction of objects, how they collide with each other. We have to introduce some kinds of AIs to drive the opponents, or the virtual cohorts and people you have on your team. You need to collaborate with other people or play against them, and deal with the interaction of people in these virtual worlds across the large distances between them. They may be on the other side of the globe. They have to interact with each other, and it all has to feel like they're present there in the moment. Video games are actually the hardest problem in computer science, if you think about it, because you have to do everything in order to make the best experience. One day, when we have the ultimate video game experience, it'll feel no different than being in reality here. We're actually going to feel like we're inside it. That's the ultimate game.
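
To make "doing everything in a 60th of a second" concrete, here is a skeletal sketch of a game's frame loop in Python. The subsystem functions are empty stand-ins; only the structure, every subsystem fitting inside one frame budget, reflects the description above.

```python
import time

FRAME_BUDGET = 1 / 60  # seconds available for everything below

def step_physics(dt): pass   # collisions, object motion
def run_ai(dt): pass         # opponents, virtual teammates
def sync_network(): pass     # players on the other side of the globe
def render(): pass           # the simulation of light

def run(frames=600):
    previous = time.perf_counter()
    for _ in range(frames):
        now = time.perf_counter()
        dt, previous = now - previous, now
        step_physics(dt)
        run_ai(dt)
        sync_network()
        render()
        # Sleep off whatever is left of this frame's budget.
        leftover = FRAME_BUDGET - (time.perf_counter() - now)
        if leftover > 0:
            time.sleep(leftover)

run()
```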


So what Jensen realized was that there's demand here, and the fundamental technology needed to create that is one that's important for mankind in general, but you need this large market in order to pay for the development of this thing. There's an entertainment purpose over here that's large enough that we can afford every generation of GPUs we create. It's $3 or $4 billion that we invest in creating each one. No other single market can support the development of that, but through video games we get this core, and then we can have adjacencies: simulation for robotics, for autonomous vehicles, for design of products, for collaboration. Maybe one of these days we'll be doing an interview like this inside a virtual reality that's powered by that same gaming technology. So, my team is focused on building the tooling and the fundamental technologies at that layer to create these possibilities and these applications, whether they be video games or simulation for some of the things I mentioned, like robotics and autonomous vehicles.


Ojig Yeretsian:What are some of the problems you're trying to solve?


Rev Lebaredian:There's a whole lot of them. We still haven't solved rendering. Simulating light is really, really hard, and doing it fast is even harder. We understand the principles of light physics well enough that we can do approximations, but what we have to do is simulate billions and billions of photons bouncing around in a scene, and figure out which ones hit your sensor, whether it's your eyeball or a camera that you're modeling. Doing that extremely fast, in a 60th of a second, is hard. Even the best that we do for movies doesn't have that restriction; they can afford to have supercomputers, thousands of computers in a data center, to calculate those final pixels that you end up seeing in the movie theater. They can spend hours and hours, or even days, rendering a single frame. We have to do that in a 60th of a second, in real time. So, the first problem that's on my mind always is: how do I take the things that take hours for a film and make it so that we can do them in a 60th of a second?


Once we can do that, then we can get close to making a virtual reality that's believable, so that if I stick you in this virtual reality, you might not actually know that you're in it.
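
To put rough numbers on that gap, assuming an illustrative two hours per film frame (real offline frames range from minutes to days):

```python
# Back-of-the-envelope: how much faster a real-time renderer must be
# than an offline film renderer. The film-frame time is an assumption.
film_frame = 2 * 60 * 60          # ~2 hours per offline frame, in seconds
realtime_budget = 1 / 60          # real-time budget per frame, in seconds

print(f"{film_frame / realtime_budget:,.0f}x speedup needed")  # 432,000x
```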


Ojig Yeretsian:It sounds to me, from all that we're talking about, like the future is coming faster and earlier, and it's forcing us to contend with our understanding. It's like a culture shift, a paradigm shift for us. AI is already here. There's technology to do gene editing. There's facial recognition, there are amputees with robotic limbs, there are sensors on the steering wheels of cars so that if they sense you're getting sleepy or your mood is changing, the car will start talking to you to keep you awake and engaged. These are all things that were unimaginable.


Rev Lebaredian:There's a lot of technology we're building inside the car, not just for self-driving cars, but for assisting drivers. Technologies like that, where we have cameras that can see if your eyelids are drooping or if you're agitated, and try to help you, it's remarkable.


Ojig Yeretsian:To help reduce road rage, perhaps. Sebastian Thrun developed a machine learning algorithm to help diagnose cancer, and the radiologist's role is going to change as a result. They're not necessarily going to be replaced, but they're going to have the augmentation you mentioned, with classifying and reading of the CAT scans and the MRIs and the X-rays, and the radiologist will be more on the cognitive end of thinking about disease. So, how do you see technology impacting our lives and humanity?


Rev Lebaredian:Understandably, all of this technology happens so fast that it's scary. It's scary even for me, even though I'm in the middle of it. It's happening at a pace that mankind hasn't experienced before, so it's hard for us to digest how fast it's happening and what the repercussions of each of these things are. So, we have to be very careful about how we integrate technology into our lives, and really be thoughtful about it and not just assume that it's good by default. Technology is neutral, but the application of it isn't necessarily, right?


Ojig Yeretsian:Yeah.


Rev Lebaredian:That being said, one of the biggest fears is that AIs are going to make people obsolete. I just don't see that. It doesn't make sense to me that we would feel that way. A lot of the things that we think about are manufacturing jobs, and stuff that robots can go replace. If you look at it, traditionally those jobs didn't exist to begin with. It's kind of weird to think that the pinnacle of mankind is a human standing in an assembly line, toiling away hour after hour doing mundane, monotonous tasks. We were mechanizing mankind, which is odd. Humans are creative; they're wonderful, interesting creatures. We should try to do everything possible to make it so that they can reach their potential without having to do the mundane and monotonous things.


We were just discussing virtual worlds and simulating them, but one of the bigger problems with virtual worlds is actually the creation part. Creating a virtual world is extremely expensive. It takes thousands and thousands of people to construct a really large virtual world experience. One of the most important ones in recent times is a game called Grand Theft Auto V. It was released in 2013, I believe. If I recall, they spent about seven years building this game, and at some points they probably had 1,000 artists constructing this virtual world. It's still extremely popular. People play it all the time. If you go search on YouTube, you'll find millions of videos of people creating movies inside the Grand Theft Auto world. They take it and modify it and insert their own characters; they put Marvel superheroes in there. The reason why it's so popular is because it is the most accessible, the largest virtual world of high quality that you can go access, but it took 1,000 artists seven years to create.


It's a micro version of Los Angeles. They call it San Andreas in there, and it's great, but it's nowhere near what we really want: something that's as rich as the real world we live in, and even more. Except we've reached the limit. There are only so many hundreds of millions of dollars you can put into creating these virtual worlds. So, to construct them, how do we take these thousands of artists and augment them with AI tools, not so we can put them out of business, but so that they can create not just this little micro version of Los Angeles but the whole globe? So that you can go walk into any building, into any alley, into any basement, and it's detailed, and rich, and filled with all of the objects that you would expect there to be in the real world. Maybe it'd be based on the real world. We can take the Google Maps data that exists, satellite data, and use AI to augment that and build these worlds out.


When we introduce these AIs, I don't believe there's going to be a single artist that goes out of business. What we're going to do is take away the monotonous task of handcrafting every single piece of geometry, every single little thing in there, and I think that's what's going to happen in general. Now, the scary part is when it happens fast. There's this period where you have people who have been doing something for a long time. Sometimes they're not even capable of adjusting to the new thing, so there's pain there. We need to get better at that as a society. How do we make people not dependent on one specific task as their job or career their whole lives? People should be adaptable and creative, and we should be progressing together and learning to do new things.


Ojig Yeretsian:So, you believe that we're not prepared?


Rev Lebaredian:I don't think so, and I particularly don't think we're prepared here in the US. We're actually notoriously bad at dealing with new technology. If you look at the political landscape, I don't think we have leaders in politics who truly understand what's happening as we speak, and there's no plan for this. Hopefully that'll change soon. There are of course smart people in government, in our various agencies and whatnot, but just in terms of leadership, you can see it any time Congress calls tech leaders to-


Ojig Yeretsian:Fly them out there [crosstalk].


Rev Lebaredian:Summon them out there to talk. There seems to be no understanding or even respect for what it is they're talking about.


Ojig Yeretsian:The European Union has the General Data Protection Regulation. Article 22 states that Europeans have a right to know how an automated decision involving them was reached, and a right to know how an automated process is using their personal information. Is this something that you welcome?


Rev Lebaredian:Well, I welcome governments thinking about these things. I don't know if the particular way they've implemented it is the best, but at least they're doing something. We comply with all of those, and as far as I can tell, so far there haven't been any negative repercussions, except that we had to do extra work to comply. All of those things are important; I think something is necessary, and society should be engaged. These are important questions.


Ojig Yeretsian:There's a lot of concern that machines are making decisions instead of people, and that there's an inherent bias embedded within algorithms. Is this something you encounter in your work?


Rev Lebaredian:The algorithms that we deal with are probably not the ones that you're thinking about there. We're not Facebook or Google, where we're dealing with people's personal information and social media. So, bias to us means something else: this car thinks there's a lane to the left here versus to the right, something like that. That being said, I'm actually less worried about machine bias than I am about human bias. Human bias we definitely know exists, and we know it's really bad. Machines might have bias right now, but we know how to fix that, and we know how to test it, and we know how to measure it. I don't think we know how to fix humans yet as far as their biases are concerned. I can imagine that sometime in the future, maybe the not-so-far future, we'll have judges and arbitrators that are AIs that make decisions. I'd trust them to make a decision on a criminal case involving a minority holding up a liquor store, or something like that, over most of the judges that are currently in place, and probably to do it in a far less biased way.


Ojig Yeretsian:I've heard the example of the hospital exam room, where machine-assisted healthcare is actually reducing the number of hospital-acquired infections and sepsis. I had never heard it in the more moral and [inaudible] realm, such as the judicial system.


Rev Lebaredian:Yeah, we trust humans to be arbiters of things that they probably have no business doing. I'd rather have an algorithm or math to decide these things.


Ojig Yeretsian:What could go wrong?


Rev Lebaredian:The work that I'm doing is actually to help us solve these problems before they cause harm. Simulation is the key to doing that. One of the most direct examples is the simulation we're doing for autonomous vehicles. Before we put these cars out on the road and really sell them to people, we need to make sure that they're going to work well in every possible environment and every possible situation, with other crazy humans around them, driving around doing crazy things. There's actually no good, ethical way to do a lot of the tests we would really like to do. How are you going to be sure that the self-driving car doesn't run over a parent pushing their baby in a baby carriage when they go out into the road without looking both ways? You can't test that in real life. We can try to mock it up with some cardboard cutouts of those humans or something like that, but it's not the same thing.


Ojig Yeretsian:Yeah, it's scary.


Rev Lebaredian:So, all this work that we're doing to construct these virtual worlds and do them in real time ends up helping us here. We need to put humans inside these worlds that we test our cars in, and have them drive millions of miles and fool these cars. We're building a brain for this car that perceives the world and decides how to act upon it. Our simulators are virtual reality for those car brains. We produce these graphics and pipe those pixels directly into the sensor inputs on the computer that's running inside the car, and the car, if we do our job right, doesn't really know the difference between reality and the virtual reality we're giving it. So, if we can simulate it beforehand, the better we can do these simulations, the higher-fidelity the simulations, the better chance we have of averting some of the really tragic things that might happen. We can all imagine what happens if an autonomous vehicle goes awry, but I'd actually argue that we already know what happens when humans go awry. There's plenty of-


Ojig Yeretsian:Examples.


Rev Lebaredian:Plenty of bad drivers. I'm sure you've experienced some of them driving out here earlier.


Ojig Yeretsian:Absolutely.


Rev Lebaredian:So again, I think in a lot of these realms, the best chance is to make algorithms that are less biased and not as flawed as humans.
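
A highly simplified sketch of the closed loop Rev describes: the simulator renders pixels, the pixels stand in for camera input to the car's brain, and the brain's decisions drive the simulator forward. Every name here (Simulator, PerceptionStack, Frame) is a hypothetical stand-in, not NVIDIA's actual driving-simulation API.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    pixels: bytes          # rendered RGB image, exactly like a camera feed
    timestamp: float

class Simulator:
    """Stand-in for a renderer that produces synthetic camera frames."""
    def __init__(self):
        self.t = 0.0
    def render(self) -> Frame:
        self.t += 1 / 60   # one simulated frame per 60th of a second
        return Frame(pixels=b"\x00" * (320 * 240 * 3), timestamp=self.t)
    def apply(self, steering: float, throttle: float) -> None:
        pass               # advance the simulated car with these controls

class PerceptionStack:
    """Stand-in for the car's brain: pixels in, driving decision out."""
    def decide(self, frame: Frame) -> tuple[float, float]:
        return 0.0, 0.5    # steering, throttle (a real model goes here)

sim, car = Simulator(), PerceptionStack()
for _ in range(600):       # ten simulated seconds of driving
    frame = sim.render()               # virtual reality for the car brain
    steering, throttle = car.decide(frame)
    sim.apply(steering, throttle)      # close the loop in the simulator
```

The design point is that the perception code never knows whether its pixels came from a camera or a renderer, which is exactly what lets dangerous scenarios be tested safely.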


Ojig Yeretsian:How might this create a better world?


Rev Lebaredian:That's a good question in general, and what does that even mean, a better world? I think there are some simple metrics of better worlds: fewer babies dying, that would be a good thing. People living longer, more people with enough food in their bellies so they don't have to worry about it. People getting educated so that they can keep their minds busy. Without technological progress, we wouldn't be where we are today. I know things seem pretty crazy, but it wasn't that long ago that a good portion of our babies used to just die at birth, and the mothers along with them. We take it for granted now. Babies are born early, like my sons; they were born weeks early. That would have been a death sentence for them before, but they're alive and kicking right now, and thriving, because of technology. In everything that we're doing there's a dangerous aspect, but generally the world has always gotten better as a result of it.


Ojig Yeretsian:What's exciting for you in terms of new technologies? What do we have to look forward to? 


Rev Lebaredian:Well, in the near term, the things that we were just discussing, the things that I've been working on for the past few decades. In terms of virtual worlds and computer graphics, I feel like we haven't realized their full potential. We've been primarily using them for entertainment, which is great, but we're almost at the point where we're going to start weaving these virtual realities into our daily lives. 40, 50 years ago the average person didn't have a video camera. The average person barely had a camera, and if they did, it wasn't something they could use all the time. Getting film developed was expensive and cumbersome. You look at our children now and they're all videographers, they're all photographers, and they're creating content and worlds themselves. Everybody is. I want to do the same thing for 3D worlds, for virtual worlds. I want to get to the point where my grandchildren, hopefully before then but at least my grandchildren, are going to be able to construct virtual worlds that are more complex, richer, and more beautiful than what Grand Theft Auto has done or what we saw in Avengers: Endgame.


By using whatever device is there or just by speaking, I want to see my grandchild step into a virtual world and say, "I want a forest here," and a forest appears. "I want a stream with a unicorn jumping over the stream." Just describe it and have this world unfold in front of them. Once we get to that point, I can't even imagine the things that people are going to do with it. So, that's the thing that gets me excited.


Ojig Yeretsian:How can folks get more information about your innovative work?


Rev Lebaredian:Well, you can definitely go to our webpage and all our social media feeds: NVIDIA.com, or find us on Facebook and Twitter. If you're a developer or into the technology directly, we have Developer.NVIDIA.com, where we provide most of the technology I've been speaking about directly, for free, for people to download and incorporate into their tools. One of the most interesting things I've ever worked on, and my passion right now, is a new project that we just announced, one we kind of hinted at about a month or two ago. We call it NVIDIA Omniverse. It's a platform that we're building that allows for a lot of the things that I've been talking about here. We want to connect various tools in different domains, whether you're an architect, or a product designer, or a video game creator, or a director for a movie. All of these domains have different tools that they use to describe things that are actually quite similar: they're constructing objects, and worlds, and scenes.


So what we're building is a platform where all of these can be connected together, and we can allow people to create these worlds together using the tools that are specific to their domain. We showed an example of this; we called it the Google Docs of 3D. Just like how you can go and edit a spreadsheet with your colleagues or friends simultaneously, we want to provide that, and we are starting to provide it, for people creating 3D worlds. So, you and I can be in completely different parts of the globe using our own tools. You might be using a tool to paint textures on a model, and I could be using a tool to construct a building, using something like Revit from Autodesk, which many architects use. We can be collaborating, building these worlds together. So, you can go check that out if you search for NVIDIA Omniverse. We're doing some cool stuff.
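
Omniverse's collaboration model builds on Pixar's USD scene description. As a hedged illustration of the "Google Docs of 3D" idea, the sketch below uses the open-source usd-core Python package rather than Omniverse's own connectors: two tools author separate layers, and a shared stage composes them. The file names are made up.

```python
from pxr import Sdf, Usd, UsdGeom

# One tool (say, a modeling package) authors the base geometry.
base = Sdf.Layer.CreateNew("scene_base.usda")
stage = Usd.Stage.Open(base)
UsdGeom.Xform.Define(stage, "/World")
UsdGeom.Sphere.Define(stage, "/World/Ball").GetRadiusAttr().Set(1.0)
base.Save()

# A second tool's edits live on their own layer.
paint = Sdf.Layer.CreateNew("scene_paint.usda")

# The shared stage composes both; earlier sublayers are stronger.
shared_root = Sdf.Layer.CreateNew("scene_shared.usda")
shared_root.subLayerPaths.append("scene_paint.usda")
shared_root.subLayerPaths.append("scene_base.usda")
shared = Usd.Stage.Open(shared_root)

# Route this tool's edits to its own layer; the base file is untouched.
shared.SetEditTarget(Usd.EditTarget(paint))
ball = UsdGeom.Sphere(shared.GetPrimAtPath("/World/Ball"))
ball.GetRadiusAttr().Set(2.0)   # this edit lands in scene_paint.usda
paint.Save()
shared_root.Save()
```

Because each participant writes only to their own layer, two people can work on the same scene without overwriting each other's files, which is the core of the collaborative workflow described above.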


Ojig Yeretsian:Thank you so much, Rev.


You've been listening to Method to the Madness, a biweekly public affairs show on KALX Berkeley, celebrating Bay Area innovators. You can find all our podcasts on iTunes University. We'll see you again in two weeks.

