SPEAKER 1: This is a production of Cornell University.
CHARLES JERMY: Welcome to the last of the summer lectures. That means tomorrow it will start to snow. Not really. If there's an emergency, please remember where you came in. Or exit through one of these exits. Again, I want to thank the College of Agriculture and Life Sciences for the loan of this wonderful hall. They've been very kind to us.
My name is Charles Jermy. I'm the associate dean of the School of Continuing Education. We're glad to have all of you here. Kavita Bala is a professor in the Department of Computer Science and the Program of Computer Graphics at Cornell University. She received her master's and PhD degrees from MIT and her Bachelor of Technology degree from the Indian Institute of Technology.
KB specializes in computer graphics, leading research projects in physically based scalable rendering, perceptually based graphics, material perception and acquisition, and image-based modeling and texturing. She has coauthored the graduate textbook Advanced Global Illumination, now in its second edition. And she has authored or co-authored at least 90 professional papers.
KB chaired the SIGGRAPH Asia 2011 conference and co-chaired Pacific Graphics in 2010 and the Eurographics Symposium on Rendering in 2005. She has served on the Papers Advisory Board for SIGGRAPH and SIGGRAPH Asia and as associate editor for Transactions on Graphics, Transactions on Visualization and Computer Graphics, and Computer Graphics Forum. Her 3D work-- I mean, her work, but it is 3D. Her work on the 3D mandala was featured at the Rubin Museum of Art in New York. KB has received the NSF CAREER Award, Cornell's College of Engineering James and Mary Tien Excellence in Teaching Award in 2006 and 2009, and the Affinito-Stewart Award. KB, Virtual Realism and Computer Graphics.
KAVITA BALA: Thank you all for coming in-- it's a glorious day outside, and yet you're here. So I appreciate your coming in. I'm going to start with: what does virtual realism mean? And what does computer graphics have to do with it? I'm hoping to tell you a little bit about this, and a little bit about the role Cornell has played in the field of computer graphics and the impact it's had on the world. So I'll speak broadly about, really, all of the research that's been done here at Cornell in the graphics field.
So last year you may have seen many headlines like this one. "Facebook Buys Oculus, Virtual Reality Startup, for $2 billion." And you may have thought, for that kind of money, I want to know what the meaning of life is. And they're very unlikely, unfortunately, to tell you that. However, the buzz has been relentless. So I'll try to tell you today, at least, what they're about.
And so what kind of buzz do they get? Things like this. "How Oculus Rift Won Comic Con." Or "Bit by Bit, Virtual Reality Heads for the Holodeck." You may remember what the holodeck was. It was from the science fiction show Star Trek: an immersive environment that could do magic.
So are these guys really headed to do magic? I'll show you an example of how people use it. And you can see. I also have, actually, this is the Gear VR. At the end of the talk today you can come and try it out. You can do it with my cell phone, and we can show you a little demo. But let me just show you an example of somebody online looking at this.
So here on the bottom right, you see a user with the Oculus Rift gear. And in the top left, some idea of the scene she's viewing as she's viewing it. There's a bit of a lag, so you'll see that her responses are a bit delayed with respect to the video.
KAVITA BALA: Right. At some point she starts. So that's one video.
This is a five-year-old kid. I always like to see how kids react to this thing. On the left you see the kind of images the kid sees, and you can see it's challenging his sense of balance. One of the things I notice with kids versus adults: kids will actually spin around the full 360 degrees that they're supposed to. Adults like to face forward. So kids actually get how you're supposed to use this better than the grownups do.
So this is the kind of technology. That's a five-year-old. This is the kind of technology that he's just going to expect as given 15 years from now. And that's the world we're going to live in.
Let me just tell you briefly how it works. These goggles are supposed to provide stereoscopic 3D vision. So what does that actually mean? It means that in this device, each of your eyes is shown an image. And these two images might look almost exactly the same. They're not actually exactly the same-- they're a little different.
Different enough that, just as your visual system works, you get two different images in your left and right eye, and you fuse them together to get a sense of 3D in the world. That's what this device does. It gives you two different images for your two eyes, and your brain fuses them together into a 3D image. That's one aspect that makes these devices so compelling.
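To make the two-images idea concrete, here is a minimal sketch of how a renderer might place its two cameras. All the numbers are illustrative assumptions, not specifications from any real headset, and `eye_positions` is a hypothetical helper, not a real API:

```python
import numpy as np

# Assumption: a typical interpupillary distance is about 63 mm;
# real headsets usually let you adjust this.
IPD = 0.063  # meters

def eye_positions(head_pos, right_dir):
    """Offset the head position half the IPD along the head's right axis."""
    head_pos = np.asarray(head_pos, dtype=float)
    right_dir = np.asarray(right_dir, dtype=float)
    right_dir = right_dir / np.linalg.norm(right_dir)
    return head_pos - 0.5 * IPD * right_dir, head_pos + 0.5 * IPD * right_dir

# Head at eye height 1.7 m, with +x pointing "to the right":
left_eye, right_eye = eye_positions([0.0, 1.7, 0.0], [1.0, 0.0, 0.0])
# The renderer draws the whole scene twice, once from each position;
# the brain fuses the slightly different image pair into a sense of depth.
```

The only difference between the two renders is this small horizontal offset, which is exactly the disparity the visual system expects.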
The other aspect is, of course, this whole 360 aspect of it, which is immersive. You have sat in front of screens all your life, and you know that the field of view is kind of limited and small. Whereas in these things, you turn around and you see the scene there. And that sense of immersion is called presence by the true believers of VR. And that's what they want to achieve. The sense of presence where you forget where you are in the real world and you're actually in this virtual world.
So these devices look something like this. As I said, that's the cheap version-- just a $200 version, the Gear VR. There's a little strap that you put on your head, and you stick in your cell phone, which shows the two different images. And there's a bunch of circuitry. And that's it. It projects to your eyes, it tracks your head, and it knows where you're looking.
That's a $200 device. As I said, it's a cheap version. There's a cheaper version still. Not to be outdone, Google has a cardboard version of this that you can cut out and make at home, and then stick your cell phone in. So if you want to go home and try this for $17, you can have the experience. But feel free to come up at the end and try it out here, too.
Games are a clear application domain-- they've been touted a lot as the ultimate domain. But there are other, more serious applications that people are thinking about, which I also think make this a particularly exciting time. For example, clinicians are going to use this to simulate what somebody with Parkinson's experiences visually, and use that to develop an understanding of what their patients are going through. That's one example.
Here's another area that you might not think of. Say you want to visualize the world at nanoscale. You're a scientist who wants to understand the world. Wouldn't it be great if you could strap on these goggles and have a first-person view of what it is to live at that scale? That's another example that people have raised.
This is a training application. They do it for troops, but this is also an emergency training application. The claim is that because of this sense of presence-- because you're so immersed-- you actually experience the stress of, in this case, say, a crash landing, and can work through your stress levels, stay calm, and get trained to do the right thing in such conditions. And more recently-- this is an example from just yesterday-- people are using it to develop virtual cultural heritage and tourism applications, where you can wander around a place before you take a vacation there.
So there's a lot of buzz around it. And the hope is that this will actually play out in your lifetime and you'll get to enjoy all of these benefits. I want to step back and introduce another term. So this is virtual reality: everything you see is virtual. The content is virtual.
There's another phrase, augmented reality, which you might have also heard. I'm now going to give you a taste of what people are envisioning in this augmented reality world. Now I apologize-- I'm going to show you an ad from a company. The previous one was also an ad from a company. Just bear with me; it does a good job of showing how this might play out.
-We could go beyond the screen where your digital world is blended with your real world. Now we can. This is the world with holograms. What will they enable us to do? New ways to visualize our work.
-I have an idea for the fuel tank.
-New ways to share ideas with each other.
-How are things going your end?
-I just put the images in OneDrive.
-More immersive ways to play. New ways to teach and learn.
-So put the new trap in the place of the old one.
-Then tighten here and here.
KAVITA BALA: So this is augmented reality, where you're blending real camera input from the world you're in with some virtual input, seamlessly, so that you can do your work better. Now, you may have worried that the woman was going to walk into a wall. But presumably we're all going to get very good at walking around with things strapped to our heads and blending these things together. We're going to have to adapt.
That's one example. There are other examples. For example, MINI is designing augmented reality goggles. They look to [AUDIO OUT] The mic cut out, yeah. But they could be very useful. The idea is that if you wear these goggles, you can then sense your entire environment and so you can become a safer driver because you'll know when you might be heading into a collision or you're too close to the cars in front of you or behind you.
All right. So that gives you a flavor of what VR and AR are. What does computer graphics have to do with it? Well, computer graphics is the core technology that drives VR and AR. Computer graphics connects the virtual with the real.
So you saw, for example, that model of the motorbike. There was this woman who was designing the motorbike on the screen. And then there was also a real motorbike. And computer graphics let her basically blend these two images together so she could overlay her vision of her fuel tank on the real motorbike and have that sense of seeing how it would play out.
Now, if you think VR and AR are too pie in the sky-- that it's going to take 15 years before we get anywhere-- I want to give you an idea of how computer graphics is in your life right now, whether you realize it or not. You, or some relative of yours, I'm sure, has at some point bought something from the IKEA catalog. A few years ago IKEA announced that in fact 90% of their catalog is virtually rendered-- that it is not based on any real studio model.
And in fact, I'm going to use this mnemonic of putting a blue V on anything that is virtually rendered rather than a real photograph. This shocks most people, because most people thought they were looking at a catalog photograph of a real scene. But actually it is computer graphics, and they never realized it was not real. If you've ever worried that your kitchen never looked as perfect as theirs, now you know why: theirs aren't real.
This idea of visualizing things that aren't actually real-- many companies are now getting into it. For example, there's a company called Floored in New York City that builds such models of real estate for people who are starting new companies or getting new homes, so they can visualize the entire layout before they actually step in. Here's another example. This one is in fact interactive, unlike the IKEA renderings, which take a long time to render each image. You can walk around and see how your office space might look before you ever sign up for that kind of interior design.
Closer to home: you may have seen Gates Hall, which houses Computing and Information Science, which is where I live. Before the building actually went up, the architects showed us visualizations like this. And this was not particularly pleasing to us; we didn't have a clear sense of what was going on. But as the project developed, they started showing us visualizations like this. This is a virtual rendering of that building long before the building existed. We all circulated it, and it really helped the department plan how they would interact in this building.
And just to give you an idea of how the building actually looks, if you haven't wandered down there to see it: this is how it really looks. Now, there are some differences-- in fact, the architects changed what they actually built-- and the lighting is different. But it's pretty close. It's impressive how close the rendering comes to the real thing.
The idea of using graphics to visualize events long before they exist has much wider application. You know that Cornell has played a major role in space exploration. You've heard about all the Mars exploration we have done. Another domain where graphic simulation plays a big role is in fact in trying to simulate and visualize scenarios long before the mission actually takes place so that you can plan and you can anticipate different outcomes.
For example, graphic simulation was used to visualize what would happen for the deployment of this parachute before the mission. And then, of course, you still have to go to the planet. And all hell breaks loose.
In a place where fact meets fiction: you may have seen the recent movie Interstellar. What you may not know is that the physics simulated in Interstellar was actually done very carefully. The moviemakers collaborated with a Caltech physicist to correctly visualize relativistic effects. And the belief is that this is the closest people have ever come to seeing how a black hole might look.
So that's how computer graphics feeds into science. But it also feeds into entertainment-- it has always played a big role there. One example is this movie, which you may have seen: The Curious Case of Benjamin Button. The strip in the middle is the actual actor, and it's shown with the R there because that's how he really looks.
But in the movie, what they did is simulate his appearance both as he got young and as he got old. And if you've seen the movie, you'll know it was very well done. It did not violate any of your assumptions, and you thought it looked realistic.
So that's an area where computer graphics has always played a big role. Another type-- this one is not so much about realistic appearance-- is animated movies, where graphics has always been used. I'm sure you've seen this. Here the goal is not to look real but to be believable and relatable. You want to relate to the characters, even if that plastic blob doesn't look like anybody you know in real life.
Movies invest a lot of money into making things look good. The other extreme of the entertainment industry is actually much more demanding: games. The reason games are more demanding is that the character-- that is, you-- can walk to an arbitrary place. You can wander into a part of the set that wasn't set up to look good. So they're very demanding. But they have come a long way since the '70s and '80s, when progress was first being made on them.
So to give you an example. Here is the "Madden" football games. And I'll just let you see it for a bit.
-So they're going to take these chances. Give their guys a chance to win. And the bad blood? Wow, it sounds bad. But that's good. Because that means it's going to be physical on the field. It's going to be a lot of fun to watch.
-So that means LeMichael James will be back for the opening kick. And Steven Hauschka looks set now to kick it away.
KAVITA BALA: So this is all virtually rendered. And there are parts of the video where it's completely unclear that that's the case-- particularly in the group play, et cetera. They've done an amazing job of making it look real. But there are also parts where it completely breaks down. You see a person, and you go, oh, that looks completely fake. How terrible.
So that's exactly the challenge. There's an arms race here. And games and movies are continuing to push the boundaries. So that 10 years from now you're not going to have that reaction. You're going to probably accept that it actually looks real. And you're going to struggle with trying to tell that it's virtual.
Here is another example from a game. This one is harder-- it's tough to say which one is virtual and which is real. In this case, the left is real and the right is virtual. But if you just look at the hills, or at that shed, it's very hard to tell. I had to look at the grass to finally figure it out, and I do this for a living. So it took me some time to figure out what was wrong there.
So how did we get here? Well, funny you should ask. It turns out Cornell has played a major role in creating the field of computer graphics. This is an example from 1974. You all know the Johnson Museum, of course. This was a visualization of the Johnson Museum before it was built, done by Don Greenberg, who is the director of the Program of Computer Graphics here. The Program of Computer Graphics was one of the first graphics labs in the country and in the world, and it has been a pioneer and a leader all along.
And what was amazing about this is that people had not even imagined you could use computers to do this kind of visualization. This was done long before modern computing, on very primitive hardware. So it's really amazing that they achieved it back then. And it created the field whose benefits we are all now enjoying.
So let me give you a little idea of the problem computer graphics tries to solve. OK? You want to produce an image. What do you need to do to produce an image? First, let's say we're trying to model this car. The first thing you need is to capture the shape of the car-- some model that represents its geometry. Traditionally that's done using CAD models, which is another area in which the Program of Computer Graphics has played a role.
So we have CAD models, and you represent a car, say, with lots and lots of little patches or surfaces. That's the level of abstraction I'll talk about.
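As a concrete sketch of what "a shape made of patches" means (this is an illustrative toy layout, not any real CAD format): shared vertex positions, plus triangles that index into them.

```python
import numpy as np

# A minimal patch representation: 3D vertex positions, plus triangles
# stored as triples of vertex indices. Real scene models use this same
# idea at the scale of millions or billions of patches.
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 0.0],
])
triangles = np.array([
    [0, 1, 2],   # two triangles tiling a unit square
    [0, 2, 3],
])

def triangle_area(v0, v1, v2):
    """Half the magnitude of the cross product of two edge vectors."""
    return 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0))

total_area = sum(triangle_area(*vertices[t]) for t in triangles)
```

Here the two half-square triangles together cover an area of 1.0; a car body would be the same structure repeated across millions of tiny patches.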
So let's assume you have this very complex model. And often it's millions or billions of patches that represent all the geometry that you want to represent in a scene. So that's shape, that's one big piece of the puzzle. The next is, what's it made of? So that's the material part.
So often, say you're designing a car, you'll have a widget like this that the designer plays with, which lets you attribute a different color to the car. For example, nowadays if you want to buy an Audi online-- this is an Audi-- they will let you try different paints and visualize how the car will look before you buy it.
Car paints are actually very interesting. They're called metallic paints, and making them look right was a very hard challenge. One of the early pieces of work done here at Cornell was by Professor Ken Torrance and a student, Rob Cook, who went on to become a VP at Pixar. They showed how to make things look like metal. Before their research, everything in graphics looked like it was made of plastic, like that yellow ball on the bottom left.
But what Rob and Ken did is produce metallic models. That's a nice vintage image over there. All of the modern material models that capture that kind of metallic appearance are inspired by the work they did here.
So now we have the shape of the car. We have the material of the car. Is that enough? Not quite. As was famously said, let there be light. You need light to actually see things. The light interacts with these materials and the shape, and it produces an image.
And here is this car again, seen in two different lighting conditions. Here it is at dusk, and here it is in broad daylight. And it looks very different depending on the lighting. And so you do have to consider, what's the environment? What are the conditions under which the light comes together before you can produce an image?
So put it all together. Here are the key players in computer graphics. You need a shape specification, a material specification, and some idea of the light that's coming in, whether it's sunlight or indoor lights like these. A computer algorithm takes all of that, munches-- often for hours-- and produces an image that looks something like this. OK? Is that all?
Again, not quite. Because the image is only as good as it appears to a viewer. So a key player in all of this is the fact that there is a human being who sees the image. And it is that human being who judges, does this look fake, or does this look real? And if you can fake the human being out, that's when you know you've succeeded and you can call it a day.
So that's the computer graphics problem. I'm going to talk a little bit about the light simulation because that's something that we have done, Cornell has made big contributions in. And I'm going to use this particular scene to illustrate the problem.
Now, you may ask, why this scene? It's a room with two boxes in it. It turns out this is a very famous scene. It is called the Cornell box, and it has its own Wikipedia page. It was devised sometime in the '80s as a simple but useful example for understanding how light bounces around in a scene. The feeling was that if they could visualize this and get it right-- that on the right there is a picture-- then they were starting to make progress on solving this computer graphics problem.
So why is it hard? So let's just look at it. You have a box here. And by the way, there's an actual model in Rhodes Hall. You should go and check it out if you have the time.
So if I want to figure out what light is arriving at this blue point, I can connect up a ray from this point to the light. And I can do some computation of the physics of light based on the energy of the light, et cetera, the geometry of the scene. And come up with some number that represents the energy arriving at that blue point.
So you may say, that's not such a hard problem. And if I did only that, I would produce an image that looks like this. Unfortunately, this isn't quite the whole solution. The reason is that light doesn't arrive at that blue point only directly from the light source. What actually happens is light bounces around. In a scene like this, it can bounce around millions of times.
And there will be very, very complex paths. And all of those paths then converge at that blue point. And you need to compute all of that to produce the image on the right, as seen here.
So you can see it makes a big difference to get the global illumination right-- and this problem is called global illumination because it is global. If you don't get it right, it won't look real. And if you do get it right, well, you have to spend a lot of time computing. That's the challenge.
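The "direct light plus everything that bounces in" idea can be caricatured in a few lines. This toy recursion is only a sketch of the shape of the computation, with a made-up stand-in scene-- real renderers trace actual rays and average many random bounce directions:

```python
def radiance(point, scene, depth, max_depth=16):
    """Toy global illumination: light reaching `point` is what arrives
    straight from the source, plus attenuated light bounced in from
    some other point in the scene."""
    direct = scene["direct_light"](point)       # one ray to the light source
    if depth >= max_depth:                      # real scenes bounce millions
        return direct                           # of times; we cut off here
    bounce_from = scene["trace_bounce"](point)  # where bounced light left
    return direct + scene["albedo"] * radiance(bounce_from, scene, depth + 1)

# Toy "scene": every bounce sees the same direct light, so the total is a
# geometric series approaching direct / (1 - albedo) = 2.0.
toy = {
    "direct_light": lambda p: 1.0,
    "trace_bounce": lambda p: p,
    "albedo": 0.5,
}
total = radiance(0.0, toy, depth=0)
```

The direct term alone gives 1.0-- the flat, shadowy image-- while the bounced terms add the remaining light that makes the scene look real.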
So back in 1984, the Program of Computer Graphics-- this is Don and Ken and their students-- introduced a technique called radiosity, which was a breakthrough in the results it achieved. One of these is a virtual image and one is a real photograph of the box. And this was back in '84, when computers were very slow. It's kind of shocking that most people cannot tell the difference between the two. It turns out the left is real and the right is virtual. It's very hard to tell just by looking.
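In spirit, radiosity turns light transport into a system of equations: each patch's brightness is its own emission plus reflected light gathered from the patches it can see. Here is a tiny two-patch sketch with made-up numbers (real systems have thousands of patches and computed form factors):

```python
import numpy as np

# B = E + rho * (F @ B): brightness = emission + reflected gathered light.
E = np.array([1.0, 0.0])          # patch 0 emits light; patch 1 does not
rho = np.array([0.0, 0.5])        # reflectivity of each patch
F = np.array([[0.0, 0.3],         # form factors: how much each patch
              [0.3, 0.0]])        # "sees" of the other

B = E.copy()
for _ in range(50):               # simple fixed-point iteration
    B = E + rho * (F @ B)
# Patch 1 ends up at 0.5 * 0.3 * 1.0 = 0.15: lit only by bounced light.
```

The diffuse-materials assumption is what makes this work: brightness is a single number per patch, independent of viewing direction, so the whole scene reduces to one linear system.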
And so this was breakthrough work. In fact, it was adopted by many early games. Quake-- from John Carmack, who is now the CTO of Oculus-- and Descent used radiosity-based lighting. And radiosity became a core rendering engine in Autodesk products used for architectural walkthroughs and industrial design.
So that was a great start, nearly 30 years ago. And was that enough? Well, it was a good place to start, but radiosity had to make some assumptions. It assumed that materials are diffuse-- that they aren't shiny, translucent, or transparent. Unfortunately, the real world is much, much more complex.
So here are a bunch of photographs just from Flickr of people's homes. And you can see, there's a lot more complexity to the kinds of materials in the world. There's fabrics, there's glass, there's mirrors, metals, woods, ceramics. Adding all of this to the mix is the challenge that we've been solving since 1984.
So let me talk a bit about how lighting interacts with materials. When you have a matte material, light hits it and spreads out evenly. But for metals, for example, light might hit the surface and bounce off in one direction, or just a small set of directions. That's one of the challenges of computing light.
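Those two behaviors can be sketched directly. This is a toy illustration, not a production shading model: a matte (Lambertian) surface's brightness depends only on the cosine of the light's angle to the surface normal, while a perfect mirror sends each incoming ray out in exactly one direction.

```python
import numpy as np

def mirror_reflect(d, n):
    """Perfect-mirror bounce: incoming direction d reflects about normal n."""
    d, n = np.asarray(d, float), np.asarray(n, float)
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

def lambert_factor(light_dir, n):
    """Matte surface: brightness falls off with the cosine of the angle
    between the light and the normal, the same in every view direction."""
    light_dir, n = np.asarray(light_dir, float), np.asarray(n, float)
    cos_theta = np.dot(light_dir, n) / (np.linalg.norm(light_dir) * np.linalg.norm(n))
    return max(float(cos_theta), 0.0)

r = mirror_reflect([1.0, -1.0, 0.0], [0.0, 1.0, 0.0])  # down-right ray
# leaves up-right: [1, 1, 0]. A matte patch ignores direction entirely:
matte = lambert_factor([0.0, 1.0, 0.0], [0.0, 1.0, 0.0])  # light overhead
```

Real metals sit between these extremes, scattering into a small lobe of directions around the mirror direction-- which is exactly what the Cook-Torrance work modeled.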
Translucent materials are very, very challenging to deal with. What happens with translucent materials is that light goes into the medium, bounces around, and comes out. And translucent materials are, unfortunately or fortunately, everywhere: the food you eat, your skin, glass, the jewelry you wear. Surfaces all around you-- the leaves there-- are translucent.
And translucency is in fact very important, because if you get it wrong-- for example, in this sushi-- if its appearance is even slightly off, you won't eat it. You, as a human being, use information about its translucency to judge whether it's fresh. And there's another example of a translucent material, which I'll talk about shortly.
You can actually see the importance of translucency here on the top right. On the left is a rendering of a woman's lips done without a translucency model, assuming just opaque materials. You can see it looks plasticky, it looks fake. On the right is what happens if you actually simulate translucency: it starts to look like a real person rather than a plastic doll.
And the work introducing translucency to graphics was done by my colleague Steve Marschner, and it was adopted by the entertainment industry-- pretty much any rendering of skin uses their model. They won, in fact, a technical achievement award for this work. And the Program of Computer Graphics has received about 13 or 14 technical achievement awards over the years, with various of our alums getting them for contributions to this area.
So what I'm going to do today is talk about some of these challenging materials, try to give you an intuition for why they are challenging, and then describe what you might have to do to make progress toward true virtual realism. I'm going to pick two examples. One is white jade-- and these are actual photographs, as shown by the R. The other is silk. We care a lot about fabrics. I'm going to talk about why fabrics are hard to simulate and what you need to get right.
So let's look at the first of these projects: what makes jade look like jade. This is a collaboration with Ted Adelson, a perception psychologist at MIT, and Todd Zickler, a vision scientist at Harvard. The three of us have spent about five years on this project, trying to understand why white jade looks like white jade.
And in fact, the big motivator for us was this particular example. This is a photograph of a Chinese emperor's dragon seal, which sold for 1 million pounds a few years ago. We loved looking at this example because it's exquisite. You can see these ribbons of white jade, and ribbons of light that flow along with them. And we could not simulate that in computer graphics. Computer graphics at that time did a great job of getting the base looking right, but not the rest of the model.
So why does it look like it does? I'll give you the answer first, and then I'll tell you how we came to understand it. The answer is that it matters how light scatters inside the material. So how do we characterize that?
At that time we did not have access to white jade-- we were just salivating over that beautiful white jade sample. But we had soap and we had wax, so we used those as our models, because we could afford them on our research budgets.
And what we did is take these two and study them. We had models made of this frog prince, but we also made blocks of the soap and wax. Then you shine a laser at a block, take a photograph from the top, and examine that photograph to see what the light is doing inside these media.
For soap, light scatters and produces this characteristic shape we call the teardrop. Whereas this wax-- a very special wax called Parowax, used to polish floors, but it looks beautiful-- behaves very differently. The light goes into the medium and then turns back; it scatters backward, which doesn't typically happen in the soap. So we call this shape the apple shape.
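One standard way graphics researchers model these lobe shapes-- offered here as illustration, not as the specific model this project used-- is the Henyey-Greenstein phase function. A single parameter g pushes the scattering lobe forward (teardrop-like, g > 0) or backward (apple-like, g < 0):

```python
import math

def henyey_greenstein(cos_theta, g):
    """Probability density of scattering through angle theta inside a
    medium. g > 0: forward-peaked ("teardrop"); g < 0: back-scattering
    ("apple"); g = 0: the same in every direction."""
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)

forward = henyey_greenstein(1.0, 0.6)    # scattering straight ahead
backward = henyey_greenstein(-1.0, 0.6)  # scattering straight back
# With g = 0.6 the forward lobe dominates, like the soap's teardrop;
# flipping the sign of g flips the lobe toward the apple shape.
```

Measuring which lobe a real material has-- by shining a laser in and photographing the glow, as described above-- is exactly what pins down this kind of scattering parameter.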
And what we realized is that if we could capture this scattering of light properly, then we would be able to simulate these materials realistically. So the student involved with this project went off and measured a whole bunch of materials: mustard and milk and coffee, wine, and soap, of course. And he then produced renderings like this.
This was a 3D-scanned model of soap. And you can see that, for the first time, we were able to produce this classic appearance you get when you shine light from behind-- these thin ribbons of light that come through the thin features. You may say that's a lot of effort expended just on soap. But it was the most realistic soap, I think, that has ever been rendered. And in fact, we could do a broader range of materials: olive oil, curaçao, and milk.
So I've given a talk about this work before. And Joe and Pauline Degenfelder, who are in the audience right now, saw that work. Joe approached me and said, you know, I have jade. And they very kindly have given us this model of white jade-- you're welcome to come and see it later. We just got this model today. Our hope is that now, going beyond our soap and wax models, we can measure real white jade, correlate that with our earlier measurements, and understand its properties.
So you can see here, this is a particularly high quality of jade called mutton-fat jade. And it looks beautiful. We are starting to take measurements-- we took a few in the lab today, and we have some photographs here.
There's also an inkwell they gave us, on the right. You may notice that the inkwell looks almost pearlescent. We are trying to understand: why does it look like pearl when it is jade? These are the kinds of questions we hope to answer as we continue this project. This is part of the Science in Art project, and we are very excited to be part of it.
So white jade is one example of a tough material that we're starting to make progress on. And hopefully now, with real samples, we can improve our understanding of the material even more. Another class of materials shows up a lot in real life: fabrics. You care deeply about fabrics. You wear clothes, and you care about silk and velvet. And as a human being, you're very good at distinguishing silk from velvet. Well, what makes silk look different from velvet?
So that was a question we set out to answer. Again, I'll tell you the answer first and then I'll tell you why it is. It turns out the structure of the fibers and the yarns that make up the material play a big role in the appearance of these materials. So how did we figure that out?
What we did is we collected CT scans of velvet and silk. So CT is computed tomography. In fact, it's micro CT. And so what does CT do? It uses x-rays to get density information of the material that you put into the CT scanner. So I'm going to now show you the kind of density information we get for velvet.
So on the top left you see the sample of velvet we have. This is a 5-micron resolution scan. In the middle you see slices of the CT scans. And on the right you'll see a visualization of velvet. And we'll start at the top of the velvet and work slowly toward the base of the velvet. OK?
So let's look at this here. You're seeing the tips of the velvet yarn showing up. And as you slowly go down, you can see the 3D structure of the velvet pop out. And it's very characteristic. Velvet has this base and all of these yarns of fibers poking out. So that's velvet.
Here's silk. Very different. Silk is very tightly packed yarns, woven very tightly together. In fact, it looks shiny for that reason. Velvet, on the other hand, because it pokes out, has this very classic appearance. If you wear a velvet jacket, you'll see its highlights. It's called asperity scattering. It's very typical. Let me just show you these scans again.
So here's velvet. Again, you see the tips of the velvet. And as you go down, you can see the 3D structure showing up. And on the right here is silk, which is very tightly woven yarns interlaced with each other. Once you have this model-- and these are micron-resolution models; remember, I was talking about capturing the shape of materials-- we're talking about huge volumes of data to capture their appearance.
But once you have this model, you can take a photograph to give some optical information. And you can put that together to get a complete model. So here is our rendering of silk, of red silk. And you can see this is a virtual rendering, but the highlights look like silk's. And these are the highest quality renderings of silk to date in computer graphics.
And then here is velvet. And as it spins around, you'll see the classic highlights that velvet has on the rim. So you look at it, and you know it's velvet. And you'll never mistake this for silk as a human being.
You may have also noticed, even though you were focusing on the video, that the time it took to compute each of these was 240 hours on one core of a computer, and the previous one was 500 hours. So this is not ready for virtual reality anytime soon but is there to at least give us good visualizations that hopefully we will then use and work on to produce interactive demos.
And in fact, you may have heard that nowadays you can walk into certain big department stores like Macy's and they will do a full body scan of you and get a virtual avatar. One of the applications of the kind of work we are looking at in this fabric rendering is to let people try on virtual garments instead of having to go in and actually try them on for real. So this is actually a booming area that's coming up. And we're working with textile designers at the Rhode Island School of Design to get this technology out there.
So let's step back again. I've talked a lot about materials, you know, jade and fabrics. I've talked about shape by alluding to these micron resolution models. I've not talked as much about light. So how do you actually compute light efficiently? So that's one of the projects that we've been working on. This is actually a decades-long project. And I'll give you a hint of how you go about making some progress here.
So first let's look at what happens to light in complex scenes. Now I showed you this as an example of a scene that was earth-shattering in 1984. But of course, it's not very complex. The real world looks something like this. And what we want to do is simulate light in a kitchen like this one on the right.
So why is it so hard to compute light? I'd shown you this earlier picture of light, where you trace the light out from the light source and you go and hit some surface. But when it hits the surface, it actually sprays out in all kinds of directions. And I'm showing here three different directions that it might spray out in. But realistically, if you want to do a good simulation of light, depending on the materials you have, you might want to trace about a thousand rays outward-- those light blue rays.
Now each one of those 1,000 rays goes forward into the scene, goes and interacts with other surfaces, which each in turn spray another 1,000 rays. So you quickly realize how the computation gets out of control. In your first bounce you have 1,000 computations. By the second bounce you have 1 million computations. And by the third, you're at 1 billion computations.
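The blow-up she describes is plain exponential growth. As a rough sketch, assuming the talk's figure of 1,000 rays spawned per bounce:

```python
# Why naive light simulation explodes: each surface hit spawns
# ~1,000 new rays (the branching factor quoted in the talk).
RAYS_PER_BOUNCE = 1000

def rays_at_bounce(bounce: int) -> int:
    """Total rays that must be traced at a given bounce depth."""
    return RAYS_PER_BOUNCE ** bounce

for bounce in range(1, 4):
    print(f"bounce {bounce}: {rays_at_bounce(bounce):,} rays")
# bounce 1: 1,000 rays
# bounce 2: 1,000,000 rays
# bounce 3: 1,000,000,000 rays
```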
So even though from 1984 computers have gotten 1,000 times faster, that's not quite tracking this billion-fold computation complexity that we need to manage. So why do we have any hope of making progress? So this is a problem, as I said, we've been looking at for a decade or so in my group. And I'll give you a hint of how we do it. Here's a model of the Kalabsha Temple in Egypt. And this is actually a cultural heritage application where people want to visualize how this temple looked back in its heyday before modern times.
So if you have a scene like this, you're trying to visualize how it looks. And this is joint work with Bruce Walter and several of my students. As I said, what you want is to really fool the observer. As long as the observer is fooled, you're fine. So if you look at that temple, if you actually zoom in on any piece of it, there's a lot of shadowing and shading going on. But at the high level, when you pop back out to this level, the observer can't see all of that detail.
So the simple rule-- and it sounds very obvious-- but the simple rule is: if you don't see it, don't compute it. The rule is easy to state. To convert it into a computer algorithm that actually works, you need to be able to predict what you will not see, and then not compute it. And that's where the magic goes, into figuring out how to actually do that. So we've developed perceptual metrics. We have some understanding of how human beings perceive the world, and we encode those perceptual metrics in our algorithms so that we can take shortcuts in what we compute.
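The shape of that idea can be sketched very roughly in code. This is not the actual perceptual metric from the research; the Weber-law-style threshold below is a made-up stand-in, just to show the structure: predict whether a contribution would be noticed, and skip the work if not.

```python
# Hypothetical sketch of perceptually driven culling: skip light
# contributions whose predicted effect falls below what an observer
# could notice. The threshold is a stand-in, not the real metric.

def perceptible(contribution: float, background: float,
                weber_fraction: float = 0.02) -> bool:
    """Weber-law-style test: a change is visible only if it is a
    large enough fraction of the background it sits on."""
    return contribution >= weber_fraction * background

def shade(background: float, contributions: list[float]) -> float:
    total = background
    for c in contributions:
        if perceptible(c, background):   # "if you don't see it..."
            total += c                   # ...don't compute it
    return total
```

Here a tiny 0.001 contribution on a background of 1.0 would simply be dropped, while a 0.5 contribution is kept.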
Now here's that same scene. And now I'm going to show it to you at a slightly different time of the day. So say it's nighttime now, and it's a foggy evening. And the light now scatters around. It turns out that fog is extremely difficult to compute because you have light in a translucent medium. The light bounces around in all kinds of directions. This is extremely slow to compute.
But as you've probably realized, if I look at that region again, as a human being, you really don't care about all of these billions of computations that are being done to compute this image. So Mies van der Rohe said less is more. This is a case where more is less. And more complex computations actually often overwhelm the visual system. And so you can take shortcuts that you could not do for the simpler scenes.
And this is a very counterintuitive result but plays a big role in computer graphics, achieving very good visual results for very complex scenes. The more complexity you add, whether it's trees, whether it's very highly textured surfaces, the less the visual system is finicky about the accuracy of the results. So exploiting that is something that we in our field spend a lot of effort and time on.
So we have built on this sort of high-level insight to now simulate all kinds of materials and complex scenes: mirrors, metal, ceramics, translucency, glass, natural lighting. So that image on the right-- these are all virtual images, not real photographs. And in fact, this technology that we've built, called Lightcuts, actually forms the core rendering engine of Autodesk products and is being used right now by millions of designers worldwide to design things all the way from big buildings, in daytime or at night, down to a coffee pot for industrial design. So the same technology applies everywhere.
So does that mean we are done? So this is a visualization of that kitchen. And every time I show this to people-- I have it as my screensaver-- they say, oh, that's a beautiful kitchen. Is that your kitchen? And that's great. I'm glad they feel that way. But then when I show them the real photograph of the kitchen, they go, oh, it's not quite there, is it? So we still have a ways to go.
Here's a model that's virtually rendered. It's pretty good. It gives you a good sense of the space. But it doesn't look quite as good as a real photograph. And so we continue to do research to try to bridge the gap between these two.
So that's this area of computer graphics. There's another area that's sort of the sister field of computer graphics. And this is the area of computer vision. So I'm going to go back to that picture we had of how computer graphics connects virtual and real.
Computer vision is sort of the converse problem, or the opposite problem. Graphics starts from the virtual and tries to produce images that look real. Computer vision takes real images and tries to infer information, or models of the world, in the virtual domain.
So I'm going to use one example, on recognizing materials, to motivate why this is an interesting problem and why it's hard. Several of you might have seen, or might have, a robot like this called the Roomba. It's supposed to clean your house. We have one. It's more of a household pet than an actual-- it does clean, it does clean.
But say the Roomba is going around. It faces a very real problem when it hits new surfaces. It needs to figure out, what is this material that it's seeing? Now you, looking at this scene, can immediately tell the Roomba is on the rug. It just got off the wood. And there seem to be lots of little paper pieces on the rug.
But for a vision algorithm, those white pieces could be paper, or they could be cream cheese, or they could be something else. And depending on whether it's paper or cream cheese, the Roomba has to react very differently. So this problem of recognizing the materials in real-world images is something of great practical impact for robotic applications and also for augmented reality applications.
So for example, if you're wearing your augmented reality glasses, what you'd really like to do is walk into a room like this and automatically have the computer tell you, this is a living room and these are the objects in the living room. There's a mat. There's a floor, a chair, et cetera. And in fact, these are the materials they're made of.
Now maybe you can immediately recognize this. But a robot can benefit greatly from understanding, this is what the scene is made of. And if it's a cleaning robot, then it'll use a different cleaning type of device for the plastic chair than it will for the wood floor or for the fabric sofa. So scene understanding turns out to be a very important problem that we are increasingly seeing uses for.
But why is it a hard problem? You can do this. You can immediately recognize all these things. Why can't the computer automatically do that? So let me motivate a little why this problem is hard by showing you a little optical illusion that my colleague, Ted Adelson, designed.
So here's a scene on the right. And I think you look at it and say, this is a cylinder casting a shadow on a checkerboard. And if I asked you, there are two patches, A and B. Can you tell me which one is the lighter patch and which one is the darker patch? You will tell me, A is darker and B is lighter. Is that fair? Raise your hands if you agree with that. Good.
Well, it turns out that when you look at the image, though-- and I'm going to add two vertical lines here-- they're both the exact same gray value. There is no difference between A and B. Now I'll flip back and show it to you. A is clearly darker than B to your eye. And yet, when you measure the image-- I'm not playing tricks; you can go measure it yourself, and I'll post these slides online-- they're exactly the same color.
So what's going on here? What's going on is your visual system, your brain is looking at the scene. It's making an inference. It says B is in shadow. So even though it's a certain, let's say, 0.5 gray, it's actually a very light material that happens to be sitting in the dark. And so B is actually quite white. A, on the other hand, is sitting in bright sunlight, and it's 0.5. So it's actually a dark material. Your brain is automatically doing this. But a computer vision algorithm has no way of figuring out that this 0.5 is different from that 0.5. And that's why scene understanding is a very, very challenging problem.
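The ambiguity she describes comes down to one multiplication: a camera records roughly reflectance times illumination, so very different materials can land on identical pixel values. A tiny illustrative sketch (the numbers are made up, not measured from the actual slide):

```python
# The checker-shadow ambiguity in one multiplication: the camera only
# records reflectance x illumination, so a dark tile in bright light
# and a light tile in shadow can produce the exact same pixel value.

def observed(reflectance: float, illumination: float) -> float:
    """Pixel value recorded by the camera (relative units)."""
    return reflectance * illumination

patch_a = observed(reflectance=0.25, illumination=2.0)    # dark tile, bright sun
patch_b = observed(reflectance=0.80, illumination=0.625)  # light tile, in shadow

# Identical pixel values, opposite materials: inverting the image back
# to "what material is this?" is ambiguous without scene understanding.
assert abs(patch_a - patch_b) < 1e-12
```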
So Ted came up with this illusion. And lots of people have banged their head in trying to solve this problem. And it's one of the reasons why material recognition, understanding what materials there are in the world, is in fact a very hard and challenging problem. So why do we have any hope of doing anything here?
Well, so it turns out that we are at a very nice point in time. There's a ton of data out there in the real world that we can learn from. So on Flickr, there are millions of images-- which in fact we scraped-- covering all the different types of materials that there are in the real world. And we created these large data sets of millions of materials. And once you have that amount of data, you can develop learning algorithms that then recognize materials.
And so in our research we've shown how you can use deep learning to achieve state-of-the-art results. So now we are able to recognize materials like in this scene-- say, there's a fabric sofa, a fabric chair, a tile floor, and a wood table-- at about 85% accuracy. Is that great? It's better than the earlier 40% that people were doing. It's not 95%, where you'd feel like, OK, maybe I've got this problem solved. Or even 100%. That might be a bit too hard.
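The actual system she describes is a deep convolutional network trained on millions of scraped photos. As a stand-in just to show the learn-from-labeled-data structure, here is a toy nearest-neighbor classifier; the feature vectors, labels, and numbers are all hypothetical:

```python
# Toy stand-in for material recognition: gather labeled examples,
# then label a new sample by its nearest training example. The real
# pipeline uses deep CNNs on millions of images; these 3-number
# "features" are made up purely for illustration.
import math

training_data = [  # (feature vector, material label)
    ((0.9, 0.2, 0.1), "fabric"),
    ((0.8, 0.3, 0.2), "fabric"),
    ((0.1, 0.9, 0.4), "wood"),
    ((0.2, 0.8, 0.5), "wood"),
    ((0.3, 0.1, 0.9), "tile"),
]

def classify(features):
    """Label a sample with its nearest training example's material."""
    _, label = min(training_data,
                   key=lambda ex: math.dist(ex[0], features))
    return label

print(classify((0.85, 0.25, 0.15)))  # prints: fabric
```

The structure is the point: more (and better-labeled) data makes the decision boundaries sharper, which is why the large scraped data sets mattered.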
So we're starting to make progress with all the compute we have. But there's still a ways to go. So let me sort of-- that's where we are in current state of the art. Let me say where I think we're going to go over the next few years and where I hope we can go.
So are we there yet? Science fiction has actually been a great inspiration for computer graphics and computer vision. And science fiction has given us a vision of worlds where virtual and real are completely indistinguishable from each other-- right from the Star Trek holodeck. But if you remember, the holodeck doesn't only look real, right? It feels real, it smells real, it tastes real. Everything in the holodeck moves like it's real. And we are not remotely close to achieving all of that.
So there's a long distance that computer graphics still needs to go before virtual reality in the holodeck sense is a true reality.
So what are the kinds of challenges that we will have to face? For starters, you want to compute all of these images in real time. The images that I showed you from 1984 took hours to compute then. And now computers are so much faster, but the kinds of images we produce now still sometimes take days to compute. And days is not something you can afford when you put on your holodeck goggles and walk around. So that's one big challenge, getting everything done in real time.
The second is actually making things look real. So this is actually a very interesting graph, and I'm going to explain it a little bit. Masahiro Mori introduced this thought experiment. So the x-axis here shows how close your image is to human likeness. How close does it look to an actual human being? And the y-axis, I'm going to call it likeability, though people have used different terms like familiarity for it.
So what happens is let's start at the bottom left. When you start off with something that doesn't look real, so I'm going to first ask you to look at the solid curve here, which is for static objects. Nothing is moving in the world. So if you have an industrial robot, nobody really gets warm, fuzzy feelings for an industrial robot. It's OK. We like it all right.
As you get to a stuffed animal, you're starting to get to a more human likeness feeling. You like it more. And you appreciate it more. But it turns out, after you go past that, you go into what is called the uncanny valley. So you get closer and closer to looking like a human, but not quite enough. So you start looking unreal and uncanny. You look like a corpse, or you look like a zombie or something like that.
And in fact, this uncanny valley is a very famous effect. Nowadays when they're designing robots, they worry about the robots looking close enough to humans but not quite right, which completely flips everybody out. You don't want to deal with a zombie-like robot. You'd rather it looked like an animal or a machine than looking almost human without quite being human.
So the uncanny valley, that's for, as I said, the solid curve is for everything being static. It's even worse when things move. And in fact, we saw that in that "Madden" football game. They weren't quite moving right. Something was not right with the muscles. And they poured in millions of dollars to scan those players. But we still have a ways to go before the motion is completely correct and the appearance is completely correct.
So what's the best way to get past the uncanny valley? That's the hard question that the field actually has to solve. One possibility is that you go all the way down and then you crawl your way up. That's a possibility.
The other is maybe you actually develop enough of an understanding of how humans perceive the world that you might be able to jump over the uncanny valley. We don't know that we can do that. That's one of the hard challenges we face in the field.
So that's virtual reality and the problems that I think we need to solve before we can get to having full virtual reality or augmented reality. I think they're very exciting times. I tell my students this. We have the luxury of living in the best time ever. Technology is advancing by leaps and bounds. Every day we hear of new problems, like the material recognition problem is suddenly much more solvable than it ever was before. But it's not quite done.
So there are even harder challenges coming our way. And I think VR and AR will, in the next 10 years, unfold to really show great potential. And I can't wait to see the applications that will be enabled.
So these are all my collaborators. I've been very fortunate to have great collaborators: Don Greenberg and Ken Torrance, who created the field of computer graphics early on, and Steve, Noah, Bruce, Todd, and Ted, my collaborators at various institutions. And on the right are all my students. They do all the hard work. So they're the ones who really get credit for all the work that I've shown you.
Out here, I want to show you, there is a VR headset. So at the end of the talk you're welcome to come and actually try it out. And here is the mutton-fat white jade example too, if you want to come and look at it. I'm happy to take any questions.
SPEAKER 2: Over here.
KAVITA BALA: Yeah.
SPEAKER 2: Thank you. Thank you for talking to us. Two questions. About when were CAD and other computer systems for architecture developed? And about what year did they enable things like a Frank Gehry building-- things like that could never have been built without the aid of computers.
KAVITA BALA: So the Gehry building was-- sorry, not the Gehry. The IM Pei building was done in the '70s. CAD has been developing since then. In fact, CAD was one of the first computer graphics tools. Nobody was worried about making things look real. Just getting the shape was one of the hard problems.
And so the '70s and '80s were when CAD started becoming usable. The '74 one, the IM Pei one-- I think they did it manually before then. And by the late '70s it was getting useful enough that you could use tools. But not the way it was in the '90s. By the '90s it had become much more seamless.
And in fact, one of the big developments in the '90s is we started getting technology to do scans. So we would do 3D scans. So now you wouldn't have to go and model everything by hand when you're building a building. You can actually take scans and try to build models from the scans.
SPEAKER 2: Can I ask you one more?
KAVITA BALA: Yeah, please.
SPEAKER 2: It's interesting. The micro 3D scans. What kind of applications are being applied to medicine from the [INAUDIBLE] of what you've done.
KAVITA BALA: Right. So that's a very good question. There's actually two kinds of research that go into medical work. There's volume rendering-- this problem I mentioned, with micro CT as an example. You don't just have surfaces; you have everything represented by volumes. And volume rendering was a core part of computer graphics. And volume rendering is actually what you need for medical applications, because you have these slices, and you want to visualize them. You want to visualize anomalies.
There are two kinds of research that go into medical work. One is the computer vision side, where you take all these slices and you try to see whether there are tumors or anomalous behavior in that data. That's the computer vision side. And in fact, Ramin Zabih, one of our faculty members here, works with the biomedical school. They're doing some nice work there.
The other side is designing things that are customized to the human body for a particular patient. And in fact, Don has done some work with the Cleveland Clinic on designing stents for people. So that's the kind of work. Those are the two kinds of work that Cornell has been doing.
There's a broader question of, how do you do these things fast? How accurate can you make it? How noise-free can you make it? And that's a whole area of research, which is very rich and vibrant, with lots of great work going on there. Go ahead.
SPEAKER 2: One more-- I'm sorry, one more question and I'll be all done. With the new center in New York, Cornell-Technion.
KAVITA BALA: Yeah.
SPEAKER 2: Will that entity, I'm guessing, how much of what you're doing is going to be furthered with [INAUDIBLE] by the merging of Cornell and Technion at that location?
KAVITA BALA: That's a good question. I mean, I think a lot of-- as I see it, Cornell Tech can play a big role in getting the work out commercially. Right? That's one of their objectives, trying to get it from the research labs out there. And I think that can play a very big role in all of our lives. When we sit here in Ithaca, there is a bit of a distance between Ithaca and all of the companies that are out there. I think they can serve as a bridge between us and the technology industry. So that's our hope, at least. Yeah.
SPEAKER 3: So my question is going to be, what sort of research is there right now? As you laid out all the different parts that you have to consider, one of the parts was the observer, the eye that's seeing things. And we know that as people age, or have an illness or something like that, what they see is different. So what sort of research is going on to say, well, a person with this particular affliction sees something like this?
KAVITA BALA: Yeah. There's very nice research on that. Actually, Brian Barsky, who is a Cornell alum and has been at Berkeley for nearly 20 years, has been working on trying to simulate different visual problems. You know, myopia-- and myopia is a simple one. But much more complex ones too. Different astigmatisms, et cetera.
So one goal is just to simulate it, to train people. The other is to invert it: to try to see, if I'm given an image, how can I transform it and show it to a person who has that condition so that they see something that approaches normalcy. And that's a much harder problem. But there's a very rich body of research that tries to do that. And people keep making progress on it. Very nice work, actually.
They've actually tried to design-- I mean, model exactly all aspects of the visual system, simulate the physics through all of that, and then try to reverse engineer and invert the problems that arise. And it's a rich area. And they tend to relate very nicely to medical applications, so they have strong ties to the medical community, and particularly the doctors who can help. And I'm happy to give pointers offline to the kind of research that's done there, if people are interested. Yeah.
SPEAKER 4: Water.
KAVITA BALA: Yeah.
SPEAKER 4: Water is by far, as I've seen from [INAUDIBLE], one of the hardest things to render realistically, because besides all the challenges in graphics, you also have the challenges in physics.
KAVITA BALA: Yes.
SPEAKER 4: Have you guys done any work in trying to make water look graphically feasible?
KAVITA BALA: Yes. Yes and yes. So I did not talk much about motion other than saying it's very hard. And actually, one of my colleagues, Doug James, did exactly that-- water simulations. With water, there's been some really nice work out of Stanford. And now Doug's actually at Stanford. But Ron Fedkiw's group has done some of the best work on water simulations. And it's state of the art. And in fact, if you see movies, they do a pretty good job on water because they have these kinds of simulations.
One of the things that they focus on now is actually making water be more controllable. So not only do you want water to flow right, but also flow under some artistic direction. That's been a big area. But there are always new challenges. Water, fire, things like that.
All of these are examples of challenges that different areas of graphics have dealt with. And there's some good work there that we're starting to get to very believable results. What's harder is doing it real time in games. And we are still far away, I think, on the real-time aspect of that. Yeah.
SPEAKER 5: Speaking of video games, has there been any games I guess in recent [INAUDIBLE] that you've seen and just visually has just blown you away?
KAVITA BALA: I actually-- well, maybe I've seen the bad version of all of this. I thought this football game was startlingly good when I saw it. And I didn't describe how people do that. How do you get these players? They do full body scans and motion capture of these players. They put all kinds of markers on them and scan them completely-- their skin, their cloth, everything like that. I think those are examples.
Often with games, there are really two modes. You see the early, sort of preview trailer, and there are some very nice effects there. But those are computed offline. And then the game has to take shortcuts because it has to be interactive. I think you can just look at all the latest titles. They all look pretty good. And then next year they look even better. So that's just an arms race. And they're doing well. They're coming along pretty well.
I think their big challenge now is producing two images, not one, at interactive rates. That's the next big challenge. And they're all going for it. The current push toward VR is a big part of it. Producing two images that are really beautiful and realistic is hard.
A big part of VR right now is actually putting cameras out there that are capturing live content for the virtual reality. And I showed you the five-year-old playing with that. That was a visualization of Iceland. And they flew a helicopter with a bunch of cameras hanging below the helicopter. And they captured-- it's just a gorgeous visualization of Iceland, actually. And it's real footage. And then you can go in and you can walk around and you feel like you're hanging over a gorge or you're seeing horses running below you.
It's worth playing around with that in that demo. I think that's one of the big challenges. But that's real content showing it to people. Virtual content will follow in a few years. Not quite there. Yeah.
SPEAKER 6: So computer vision trying to recognize all these materials. We're seeing coming to market spectrometer chips-- I mean, the [INAUDIBLE] company, consumer fitness, I think it's called. MIT got some [INAUDIBLE] or something. Why limit yourself to what the human eye can see in order to do that recognition and provide services?
KAVITA BALA: Good. Yes. So new technologies are looking in the IR domain. And I totally agree. Vision will, whatever sense that there is, if it's cheap enough, then we will get that data and use that. And there is some work in IR. Graphics tends to-- so these are sister fields, but they are not the same. They have slightly different goals.
Graphics tends to worry about what the human eye can see, and so it tends to stay very much in the visible spectrum. But yes, for scene understanding, I expect some more of those sensors. But we've talked about this for a while. It's still not quite cheap enough. It's still not quite ubiquitous enough.
Along those lines, there's the premise that all cameras will be 3D cameras-- so why would you need to even try to understand shape anymore using traditional vision techniques? That story's been around for a while. But all cameras are not 3D cameras yet. And so there's still always this lag. Hopefully we'll get there. But we're not there yet. And I agree-- we would include all forms of data that we can integrate into our systems. Any other questions? Yeah.
SPEAKER 7: When you were showing the woman who was fixing the pipe.
KAVITA BALA: Yeah.
SPEAKER 7: When she was talking to that person, how did that happen? I don't mean to sound stupid.
KAVITA BALA: No, no, no. That's an advertisement.
SPEAKER 7: Was that a hologram?
KAVITA BALA: That's an advertisement from Microsoft. So that's how they think it'll play out. So for starters, that technology isn't quite there yet. That's the HoloLens technology. I showed it mainly to help you envision it. So the idea would be, they're both sharing the same camera feed of the pipes, right? And then he marks it there. It will go to her goggles. And it will be overlaid on her feed so that she can see the arrow showing up. That's how it would play out. If it works.
SPEAKER 7: The other woman, when she was walking to the--
KAVITA BALA: Motorbike.
SPEAKER 7: --the motorbike, there was a-- the screen was in front of her while she--
KAVITA BALA: Right. So they're playing a bit fast and loose.
SPEAKER 7: Was she wearing--
KAVITA BALA: She had something on. She had the HoloLens on. And the premise-- what they're claiming they're showing us is a visualization of how it would look to her when they blended the virtual and the real. Somebody else watching the scene will just see this crazy woman waving her hands around. But she gets to see the overlay of the virtual on the real. That's what they're trying to sell us on.
And augmented reality is starting to get there. So it's not quite as beautiful as they made it look, because that's an advertisement. But that's what we're hoping to do: overlay enough information on real-world scenes that you can blend the two together and do useful things. And you see it particularly in the kind of AR displays that you see for cars. I think that's very doable. I've actually seen demos of them showing things like a visualization of a stream of traffic coming in or something like that.
SPEAKER 7: But are we close to doing, like, holograms?
KAVITA BALA: That's-- right.
SPEAKER 7: I was just wondering.
KAVITA BALA: You know, that's one of the-- the holodeck is the ultimate. I think we've had many false starts in graphics and vision. So every 10 years it comes around and we think we're going to do 3D for real this time. And then some really hard technological problems get us. And I think we're at a nice place right now where we've figured out some very core problems that used to hold us back.
So for example, in this stuff, in virtual reality, people always would get nauseous. That was a problem that they couldn't solve. And right now they've figured out cheaper and more accurate sensors, so there isn't that lag between when your head turns and when the image updates. And because that lag went away, it's starting to become more real.
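[A rough back-of-the-envelope sketch, not from the lecture: the function name and numbers below are illustrative. It shows why the head-tracking lag Professor Bala mentions matters. If the display shows an image rendered for where your head was `latency` seconds ago, the virtual world appears offset by roughly the head turn rate times the latency.]

```python
def registration_error_deg(turn_rate_deg_per_s, latency_s):
    """Approximate angular offset between where you are looking
    and where the rendered image says you are looking."""
    return turn_rate_deg_per_s * latency_s

# A moderate head turn of 100 degrees per second:
old_error = registration_error_deg(100, 0.050)  # ~50 ms lag
new_error = registration_error_deg(100, 0.010)  # ~10 ms lag
print(old_error, new_error)
```

[With 50 ms of lag the world lags your head by about 5 degrees, which the visual system notices immediately; cutting latency to 10 ms brings that to about 1 degree, which is far less nausea-inducing.]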
So yes. The holograms are sort of the next step. That's the whole point with the HoloLens. They're claiming that we're getting there. But that's a bit more into the future. I think simpler AR, augmented reality, where you do useful things-- the pipe example might actually be something we can do in the nearer future. Somebody else had a question. Yes?
SPEAKER 8: Would there be any way to make it actually feel like they're touching the thing?
KAVITA BALA: That's a very good question. Yeah, so there's a whole area of research called haptics. And in haptics you wear gloves. And I actually had a slide of that and I removed it because that's a whole different area. You're supposed to have pressure sensors on it so that when you touch something, you might start to feel some of that.
That field, again, goes cyclically, up and down. Right now there are these nice phantom gloves that are starting to give useful feedback. But it's still not quite to the level where you could feel silk and it would feel completely different from paper.
I've actually been talking to some researchers in the perception side. They're building some devices, but they're prohibitively expensive right now. So that's a bit more into the future than where we are now. Yes.
SPEAKER 9: When you simulate materials, do you take age into account? Because I see in these arm rests wood, but there's a lot of chips and things.
KAVITA BALA: Good.
SPEAKER 9: [INAUDIBLE].
KAVITA BALA: Yes. So aging is-- there's a whole line of research in computer graphics that actually simulates aging. So there are two ways to handle it. One is you start with a clean slate, like the IKEA catalog, and then simulate the aging-- it's called weathering, actually. And there are different weathering simulations, where the rain is falling on it and so on. And in fact, Julie Dorsey, who was my adviser and is a Cornell alum, did some beautiful work on patination, which is still some of the better work in this space.
That's one way of doing it. So there are very, very detailed simulations to try different aging effects. The other possibility is that you take a camera and you capture it in the aged state. And then you capture it at different stages of aging and then try to blend it together so that you can get any level of youthfulness. All of these have been tried. But there remains more work to be done to make it even better. It's not quite perfect.
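[A hypothetical sketch, not from the lecture, of the second approach described above: capture a material in a fresh state and a weathered state, then interpolate between the captures to get any intermediate level of aging. The function name and values are illustrative; real systems are far more sophisticated than per-pixel blending.]

```python
def blend_age(fresh, weathered, age):
    """Interpolate per-pixel values between a freshly captured state
    and a weathered one. age=0.0 is brand new, age=1.0 is fully aged."""
    return [(1.0 - age) * f + age * w for f, w in zip(fresh, weathered)]

clean = [0.9, 0.9, 0.9]  # bright, unscuffed surface samples
worn = [0.3, 0.5, 0.4]   # darkened, stained surface samples
halfway = blend_age(clean, worn, 0.5)
print(halfway)
```

[Capturing more than two stages and blending between adjacent pairs gives a smoother progression through the aging process.]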
A big failing of computer graphics is that it often looks too clean. That is one of the known problems. And so when they're making movies, they'll have artists paint in things like scuff marks to make it look more realistic. But that's a very good question. These are good areas. Yeah.
SPEAKER 10: Your personal thinking of where do you think the technology will be in 10 years? Like, what kind of technology do you think will be ubiquitous?
KAVITA BALA: I'm hoping for some of these applications. So the VR applications, like having the sense of immersion in new places-- that's going to happen, I think, in the next few years. 10 years from now, I think we're going to start to get to avatars being much more realistic. The kind of challenges I was talking about, like touch, those are much farther out. There's a lot of work we need to do on the hardware before we can get there. But I think visually, we can get pretty close in the next 10 years in terms of VR and AR and scene understanding. Yeah.
SPEAKER 11: Can you imagine military applications [INAUDIBLE]? Are you enthusiastic about these, or are you [INAUDIBLE]?
KAVITA BALA: They exist, so it is not for me to say, you know. The world is what it is. So there are these games that are used--
SPEAKER 11: So you don't think that [INAUDIBLE]
KAVITA BALA: OK. So there's what the technology is and then what you feel about how it should be used and-- so my take on this is you separate the two. The technology does what it does. And then you have society decide through its value system how it wants to use the technology.
So as a scientist, it's important for me to push the technology where it goes. That's the right thing. And as a citizen of the world, I have an opinion of how I think technology should be used. But I think that has to be done through social mechanisms, not through scientists trying to sabotage science. That will never work.
Science has to do what it needs to do. And we as a society need to take ownership of the things we are inventing and understand the implications of it. And use it in the right way.
That's a partial answer to what we could do. It's a very complicated question. There is no easy answer to how we should be using it. I mean, this is a broader question.
Technology is amazing. The things we are doing are astounding right now. But for every good thing, there is the bad side to that technology. The internet is wonderful. Without it, we would all be stupid and clueless, right? I mean, you can look up anything right now and you can appear intelligent just by looking it up.
By the same token, the internet is full of horrible, horrible lies, horrible stuff, which since there are kids in the audience I won't talk about. But you know it exists. And so does that mean we shut down technology? No. We as a society need to learn to use it responsibly. And I hope we will.
SPEAKER 1: This has been a production of Cornell University. On the web at cornell.edu.
A major quest in computer graphics research has been to understand and virtually model the appearance of the real world. In this lecture--the last in the summer lecture series sponsored by the School of Continuing Education and Summer Sessions--computer science professor Kavita Bala talks about the university's pioneering research in this area in her talk "Virtual Realism and Computer Graphics" on July 29 in Call Auditorium, Kennedy Hall.