James F. O’Brien

James F. O'Brien, Professor of Computer Science and Electrical Engineering at the University of California, Berkeley.

James F. O'Brien is a professor of computer science at UC Berkeley who has published numerous studies on destruction modeling, earning an Academy Award for his pioneering work in movies and games.

John:

Hi, I'm John with JetBridge. We're an international group of elite software developers, and if you're looking for offshore engineers that are just as talented and ambitious as those in Silicon Valley, check out jetbridge.com. I'm also here with my two technical co-founders, Adam and Mischa. And today our guest is James O'Brien. James is a professor of computer science at UC Berkeley and chief advisor to Juice Labs, a startup we love. Professor O'Brien has worked with film and game companies on integrating advanced simulation physics into games and special effects. In 2015 the Academy of Motion Picture Arts and Sciences recognized his work in destruction modeling. I love that: destruction modeling, with an Oscar for technical achievement. He received his doctorate from Georgia Tech, is a Sloan Fellow and an ACM Distinguished Scientist, and has been selected as one of Technology Review's TR100. If you've ever watched action movies or played video games that are maybe a little violent, there's a good chance you're a beneficiary of his work.

John:

All right, James, thank you for being on the JetBridge podcast. My first question: when I was a kid, I would spend summers building these little scale models, and after weeks of work I would stuff them with fireworks and blow the shit out of them. And my mother never understood. She'd say, why did you put so much work into it if you're going to blow it up? Why did you decide to focus so much of your time and energy on the realistic animation of destruction? Was it a childhood thing you enjoyed, like me, or was there a higher intellectual purpose?

James:

Well, it's certainly fun to create these effects of things blowing up. In some ways it's very satisfying to make something that is intricate and has a lot of detail, and then watch how it comes apart in a destructive way.

James:

Of course, as your mother pointed out, the problem is that when you actually blow something up physically, it's gone; you've destroyed it. And many effects in movies simply aren't practical to do physically, spaceships blowing up and so on. From a technical point of view, the thing that excited me about doing the destruction effects is that if you look at the meshes typically used for simulation, and the meshes used in computer graphics for rendering, they almost always have fixed topologies; they don't change the mesh structure. But if you want to have something tear or crack apart, think about what cracking is: you have a piece of glass, a solid material, and now we introduce all these boundaries and cracks into it, and modeling those requires changing the mesh.

James:

If you don't mind it looking like Lego pieces coming apart, then you don't need to change the mesh; you can use the existing meshes, but that's not going to be realistic. If you want it to look real, you have to change the mesh dynamically: figure out where the cracks are going to go, change the mesh to accommodate them, and allow them to propagate. It's kind of a chicken-and-egg problem: how do you remesh when you don't know where the cracks are, and how do you know where the cracks are when you don't have the right mesh? So that's a hard technical problem. When I started working on this back in the late nineties, I talked with a lot of people about why the problem was interesting.
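To make the topology-change idea concrete, here is a minimal sketch in Python. It is an editor's illustration, not the method from O'Brien's papers: a 1D chain of masses joined by springs, where any spring stretched past a threshold is deleted. Deleting the spring is the one-dimensional analogue of remeshing a solid so a crack can open where the stress dictates.

```python
import numpy as np

n = 20
x = np.linspace(0.0, 1.0, n)                  # node positions
v = np.zeros(n)                               # node velocities
springs = [(i, i + 1) for i in range(n - 1)]  # connectivity: the "mesh"
rest = x[1] - x[0]
k, mass, dt = 500.0, 1.0, 1e-3
break_strain = 0.5                            # crack criterion

for step in range(5000):
    # Failure check first: deleting a spring is the topology change.
    springs = [(a, b) for (a, b) in springs
               if (x[b] - x[a] - rest) / rest <= break_strain]
    f = np.zeros(n)
    for a, b in springs:
        s = k * (x[b] - x[a] - rest)          # linear spring force
        f[a] += s
        f[b] -= s
    f[-1] += 20.0                             # steady pull on the free end
    v += dt * f / mass                        # semi-implicit Euler step
    v[0] = 0.0                                # pin the first node
    x += dt * v

print(f"{len(springs)} of {n - 1} springs survive")
```

Run as-is, the pull tears the chain apart near the loaded end; the connectivity list is the part a fixed-topology simulator never touches.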

James:

And they said, oh, this is going to be too hard, it's not going to work, it can't be done, variations of that kind of naysaying. And I personally find that kind of negativity very motivating. So if you're able to find a problem that you're interested in, one that's a hard problem as far as everyone else is concerned, but you think you have a good solution to it, that to me sounds like the perfect place to be working, because hopefully you'll be able to do something that other people haven't been able to do. I really liked the route we ended up taking in that project, and the fact that it ended up getting used in films and such has been very gratifying. But the most exciting part for me was maybe the first time I actually got the code running on a real example and watched one of my test materials actually tear apart. Those little steps, when you get the first thing working, are pretty exciting.

John:

So the short answer is you like to blow shit up. Yeah, but I don't want to have to buy new shit. That leads me to my next question. There's a lot of talk about deepfakes. There's actually a startup out of Ukraine that has a deepfake app, their round is oversubscribed, and investors are going crazy. But when I think about deepfakes, I think about a potential dystopian future, right? I mean, today any developer can download a tool and create a deepfake video, but it's not so believable, right? How close are we to that type of video being indistinguishable from the real thing by your average nontechnical person? Like, when can I go on TikTok and wear Barack Obama's face?

James:

I guess one place to start is that the term deepfake is used today generically to mean anything that uses machine learning to create a fake, forged, or artificial image or video. There's actually a whole bunch of different techniques that have been developed. Some of them were developed by startup people trying to build something for people to use, either for fun or for other purposes. A lot of the work also originally came out of research labs at universities, some work here at Berkeley, a lot of great work at the University of Washington, and the original purpose of all this was to create tools that could be used for generating productive content. Like, if I'm making a Star Wars movie, and Carrie Fisher unfortunately passed away a number of years ago, but I need to have Princess Leia.

James:

Well, now I can use technology like this to have one actor act out the role and then replace their face with Carrie Fisher's face. And that seems like a productive, good use of the technology. Using it to besmirch a politician or attack someone you don't like, I think we all agree, is a pretty negative use. And that means, as you point out, we want to be able to detect the fakes, to tell whether what we're looking at is real or a forgery someone's using to try to trick us. Right now, how likely you are to detect it depends on the context of what's being shown. If you just have a video like what we're shooting right now, where someone's looking straight at the camera and staying relatively still, then I think you can already produce things that most people, and even a lot of experts, won't be able to tell are fake or modified.

James:

If the video is something where people are moving around a lot, like two people wrestling, or someone who uses a lot of gestures and puts their hands in front of their face a lot, those sorts of things are going to be opportunities for the algorithms to fail. The thing is, if someone's really trying to perpetrate a hoax, they're not going to just take the output from the automated algorithms and use it directly. They'll load it into After Effects or some other video editing software and fix all the little mistakes. What's happening here is that it used to be, if you wanted to produce a fake video, you had to be highly skilled to do the replacement; it was a very tedious, time-consuming effort. Now the machine learning is going to do most of the hard part.

James:

The part that a human would have found tedious and difficult, it's going to do automatically, and the human just needs to come in and clean up the little errors, the little glitches where they can see something went wrong. So the real answer to your question is: today, if someone wants to perpetrate a hoax, I think there's a good chance they won't get caught, at least not until the hoax has been circulating for a while. Eventually someone might develop a new algorithm that can detect that fake, or they might find the source material and say, oh, here's the original source material, so this other image is clearly a forgery. That actually happened back when John Kerry was running for election: there were fake photographs put out showing him with Jane Fonda at a political rally he hadn't actually been to, and the way those were disproved is that the original photographs were found in historical archives.

James:

So these forgeries can eventually be discovered, but typically not before they've done harm. And if we rely on the idea that you're just going to look at a video and somehow be able to tell it's fake, that's not true. Even the experts can't do that. When the experts do an analysis, they don't just look at the video. They'll load it into a tool that lets them reveal things that aren't typically visible to the naked eye: modify the color palette, adjust the histogram, to try to bring out the types of errors that are in there. We do have algorithms that can detect fakes, but the problem is that these algorithms are in a kind of arms race between the forgery and the detection, right? If I build an algorithm that can create a deepfake, and someone says, well, I don't like that,

James:

I'd like to be able to detect when this is happening, so they go and train a neural network or some other type of algorithm to detect my forgeries. Well, now that I see what they've done, I can improve my forgeries based on studying what they did, and then they can improve their detector based on what I did. And now you've got this arms race that goes on forever. So I think the situation we're going to end up with, as we go forward, is that videos, imagery, even 3D video, media we're not typically used to today, there's going to be the potential for all of those to be created fake, displaying things that try to mislead, fool, or manipulate us in some way.
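As a toy illustration of that arms race (an editor's sketch, not a real deepfake pipeline): a "forger" draws samples from a shifted distribution, a "detector" fits the best threshold it can, and each round the forger studies the detector and moves its fakes closer to the real data until detection accuracy collapses toward chance.

```python
import numpy as np

rng = np.random.default_rng(0)
real_mean, fake_mean = 0.0, 3.0   # the forger starts out easy to spot

for round_no in range(5):
    real = rng.normal(real_mean, 1.0, 1000)   # genuine "media"
    fake = rng.normal(fake_mean, 1.0, 1000)   # current forgeries
    # Detector: fit the best threshold between the two sample means.
    threshold = (real.mean() + fake.mean()) / 2.0
    accuracy = ((fake > threshold).mean() + (real <= threshold).mean()) / 2.0
    print(f"round {round_no}: fake mean {fake_mean:.2f}, "
          f"detector accuracy {accuracy:.1%}")
    # Forger: study the detector, move the fakes toward the real data.
    fake_mean = real_mean + 0.5 * (fake_mean - real_mean)
```

Accuracy starts above 90% and falls toward 50%, which is the coin-flip floor the arms race converges to.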

James:

Even with great detection technology, we're not going to be able to tell at the time they're released that they're fake. What will happen is they'll circulate, we'll see them, and maybe three weeks or a month or two later somebody will come out and say, oh, I finally proved this is fake. And of course by that point the election might have already happened; you might have already petitioned for someone to be fired; whatever action you took based on that forgery will have already happened. So the real thing people need to do is not rely on someone to tell them the video is fake, but develop a really healthy sense of skepticism, so that when they see something they don't just say, oh, this tells me what I'd like to see.

James:

Like: the politician I don't like, here they are doing something bad, clearly they're awful, now I'm going to get all upset about it and take some action. Be skeptical. Think about what you're seeing, and don't believe the stuff that's too good to be true. We've actually done some studies that looked at how people respond to forged images, how they look at images they might consume on the internet and how they determine whether to be suspicious that something might be fake. And the number one factor we found was whether or not they agreed with the content. In other words, if I show you a negative video of someone you like, you'll be suspicious. If I show you a positive video of a person you like, you'll be accepting.

James:

And if I reverse everything, you'll be accepting of negative videos about people you don't like, and skeptical of positive videos about people you do like. In other words, we just believe what we want to be true. If we want to deal with a future where we have fake media all around us, there's really nothing else to do about it; there's no way we're going to put that genie back in the bottle. The only way we can really, as a society, hope to deal with it productively is for all of us to be skeptical about what we see. We know that fakes can be made, and we know they're getting easier to make. So when we see something, we shouldn't just accept it as truth, no matter how much our eyes want to believe it.

John:

Well, that’s like the news now, right?

John:

You won an Oscar, but you're not in the Hollywood industry per se, so maybe this isn't a fair question, but do you think we're that far off? How many years before, for example, Humphrey Bogart, who's long dead, or Cary Grant stars in a major Hollywood movie?

James:

I think if you wanted to do that today, you already could. We have a lot of examples of characters that are heavily CG, like Thanos in the Marvel franchise films, right? Thanos is a major character. He's definitely a hero character, not in the sense of being a good guy, but in the film sense: you've got hero shots focused on the actor's face, or in this case the CG character's face. It's based on the human actor who acted out the character, but obviously that face is a fantasy; it's been heavily modified. And we're all pretty much accepting of that as a major role in the film. Someone right now, today, could make a movie about Thanos's backstory or something, where he is the main character throughout, in nearly all the shots.

James:

And that would be fine. You can look at many other movies where you have characters like in the Pirates of the Caribbean franchise, I'm forgetting his name, the character that has sort of an octopus face, or the other people who are cursed to live with him. They all have a lot of CG on their faces. So this stuff we can already do, and we can make realistic shots of more human-looking characters, people that look just like me or you, who don't look particularly alien or special. We can do those shots today, but they're expensive. So right now, I think the main thing limiting a feature film with Bogart in a leading role is that it would be very expensive to recreate that face throughout all the shots in the whole film.

James:

It's getting cheaper to do that. So if you modify your question to ask how long until it's inexpensive, and maybe even go a step farther, how long until it's something a student filmmaker could do in their garage or bedroom or wherever they're making their student film? Well, then the answer is we're not quite there yet. But if we look about five years out and ask how the technology will evolve, I think that's about the point where you'll start to see student work coming out of film schools, or independent people working with very small budgets, producing films where they're able to take the face of the actor they used to film the shot and completely replace it with someone else or something else. And the technology will be easy enough and cheap enough to use that it will be available, as I said, even to students.

John:

Wow. So Casablanca 2 is on the way in six years. That's great.

James:

There is one thing I want to add to that, though. One of the big issues will be getting the acting, like Bogart's, right. You can have someone that looks like Bogart, and that's great, but if you really want the compelling acting to come across, he's still famous today because of the amazing skill he brought to the roles he portrayed. Recreating that is going to be harder. I suspect we will start to see things that are the acting equivalent of Auto-Tune. Today a lot of singers are amazing on stage performing their songs, but if you listen to the raw recording you're like, wow, this is all out of tune, what is this? And then Auto-Tune fixes it. Or even a skilled singer uses Auto-Tune to pump it up a notch. I think we'll start to see similar things done with acting: the speaking, the movement, we'll be able to tweak them slightly so that they're more of whatever we consider good. I think that's going to be very interesting also.

John:

Does that mean there's a cottage industry coming for recreating a deceased loved one, or an ex?

James:

Maybe. I don't know. Have you watched the TV show Black Mirror? There's one episode that focuses on exactly this issue, and it raises a lot of questions. So to answer your question: do I think it will eventually be possible to recreate at least the surface, superficial aspects of someone? If your loved one passes away, I don't think you'll be able to have the deep conversations you might have had before, when they were alive. But if you wanted the experience of just having them around to talk to in your environment, having a lighthearted conversation, I think that will be something that's possible to fake in the future. Whether people will want it or not, I don't know. Maybe it will be very creepy and upsetting, or maybe it'll be very comforting and reassuring. I really don't know.

John:

I suspect it's the latter. And it's going to be a really interesting future, especially for the software developers creating this stuff, which is a good segue into my next section: tips for young developers. What advice do you wish someone had given you when you were first starting out your career in computer science?

James:

I think the piece of advice that's good for everyone is to look at wherever your blind spot is. In my case, when I was starting off, actually still back in grad school, I was really focused on the technology: developing my programming skills, my math skills, all the stuff that would make me a good programmer, a good researcher, a good scientist. In retrospect, that's all very important; the technical stuff is really useful, and we all know, when we go into a CS program, to focus on developing those skills. But the relationships you have are also really important.

James:

And I think that building good relationships is, for one, a healthy way to live. But also, when you actually go out and start your own company and you're trying to find great technical people to hire, or when you're looking for a really great opportunity, those networks are how you end up finding the good opportunities. They're really important. Part of the problem is that if you're trying to hire a superstar to lead your team, just looking on LinkedIn or at someone's resume doesn't really tell you what you need to know. The way you find that out typically ends up being that you ask through your network: do you know who would be a great person for this role? That's how you discover people. So those networks are helpful both when you're looking for people to bring into whatever you're trying to do, and also when you've got good skills and you don't want just an average job but a great position where you're really able to use your skills to do something great. Those personal networks really help with that.

John:

Yeah, thank you for that. Something that Mischa, Adam, and I talk about and teach a lot to the software developers we mentor is what we call soft skills. I think our soft skills trainings don't talk about networking enough, so thank you for bringing that up. It gives me an idea: what do you look for when you're hiring a software engineer, either for one of your ventures or for your research lab? What are some green flags of positivity, and some red flags that you've noticed historically?

James:

It's hard to say what the green flags are, because when you're looking for someone to fill a role, they really need to bring strength in a lot of different aspects that come together to help them fill that role. A great programmer is wonderful, but if it's a leadership role, then he or she also needs to be able to communicate well with their team. It's not just one skill; you need well-rounded people with the full set of skills the role requires. So that's the generic green flag. The red flag question, I think, is a little easier.

James:

For me the biggest red flag, the thing that says I just don't want to work with this person, and maybe they're a nice person to hang out with, but I just don't want them involved in my professional activities, is someone who's unwilling to admit when they're wrong and learn from the experience. I've seen so many projects go sideways because of this. Maybe you have a really great person: technically very competent, they know exactly what they're doing, they're just technically amazing. Maybe they've even got the soft skills: they know how to convince people of their ideas, they know how to lead a team and get people to rally behind them, they know how to explain their ideas to management. So this is a great person. But suppose they can't admit, oh wait, I was wrong about this, the right way to approach this is actually to go in this other direction.

James:

Then all those positives become a negative, right? If you have someone who is just flat-out going in the wrong direction, but they're great at proving to everyone else that they're right and convincing everyone to ignore other facts, that inability to see when you're wrong and revise your view of the world turns all their strengths into negatives. So be willing to say it. If you're in a meeting and someone points out something you didn't think of, instead of pretending you knew it already, sticking to what you said, and just plowing ahead, that's terrible. When one of the bright people you're working with says, hey, here's this new thing you didn't think of, and you're able to say, wow, you're right, that completely changes my perspective, and now this is the new plan, that's the sort of person that leads to success. Those are the companies that pivot and end up becoming unicorns and going public. Those are the teams that develop the product that really changes the way people use your platform or whatever. Those are the success stories. And if you can't admit you're wrong, can't revise, then eventually you're just going to pull everyone over a cliff with you.

John:

I was almost scared to ask you this question, because I know you got a PhD from Georgia Tech and now you teach computer science at UC Berkeley. But one of the things that I love about being your friend is that you always come from a place of authenticity. So let me ask you: in Eastern Europe, where we have a couple of offices at JetBridge, getting a computer science degree costs a nominal amount of money; sometimes it's free. Obviously, in the US it can cost hundreds of thousands of dollars. Is the US education worth that amount of money?

James:

That's a tricky question. We were having network problems earlier; maybe I should say, oh, my network's not working, and pretend I didn't hear the question. It's a hard question to answer, because I think there are a lot of schools in the world where you're going to get a great technical education. If you're asking about a particular school compared to Berkeley, I don't know, because I'd have to know more about that school before I could really do a good comparison. There are a lot of programs in the world, both in the US and outside, that don't teach great technical skills, that sort of just go through the motions.

James:

And unfortunately, a lot of people come out of those programs not really knowing how to program; they basically cut and paste code, because that's all they've learned. I'm sure you've met quote-unquote programmers who are in that category. It's very unfortunate that they've spent their money on a program that didn't teach them what they need to learn. But among the programs that do teach the technical skills, there are a lot of differences in what you can learn. Berkeley is kind of a cool place in the sense that if you go to school there, you're not going to just learn to program. You can also take physics classes, and so you can pull a lot of stuff together and develop strengths.

James:

My own research involves a lot of mechanics, a lot of physics, a lot of computer science, and a lot of applied math. Being at a university such as Georgia Tech, where those were all available to me, was really helpful and allowed me to go in this direction. So I do think there's value there. There's also a lot of learning that happens outside the class. At any school, whether it's Berkeley, Georgia Tech, or a school anywhere in the world, one of the things I look for is how accessible the instructors are, the professors and the people teaching the material. How easy is it for you to sit down with them and say, hey, could you explain this to me more clearly? Or, I thought this was true, but it doesn't seem to be working; help me clear up my misunderstanding.

James:

And also, even when you completely understand everything, and maybe you're the star student in the class, are they available to let you go beyond the class material? Some of the best interactions I've had have been with students who came to office hours not because they were having a problem, but because they wanted to ask about stuff beyond what the class covered. They ended up doing research projects that turned into published papers, and they ended up working at NASA and places like that, where they really get to do very cool things. I'd like to think that the opportunities they had, because they came after class and because I was able to be available to them and work with them, really helped their careers. I certainly think it's a very valuable thing, and it's one of the differences between a good program and maybe a not-so-good program.

John:

That segues into our next section, which is a little bit more technical. And speaking of authenticity, I wouldn't have any if I asked the technical questions. Mischa is one of our co-founders. Mischa, I know you had a couple of questions for James after reading some of his white papers and research.

Mischa:

You had an earlier paper where you looked at simulating cracking on surfaces, like paint on ceramics, glass, dry mud. And then I saw that later you worked on another paper with Statoil, the Norwegian oil company, where you used the same kind of research in modeling seismic faults and geological formations in reservoirs. What was the process or sequence of events that took you from one domain to something that, on the surface, seems totally unrelated? And is that something common in your research?

James:

Yeah, I think it is common. All of us, when we're working in a technical field, start to develop a certain tool set of things we know how to do. To use the analogy, the old expression is that when you have a hammer, everything looks like a nail, and that's sort of true. From my perspective, in terms of doing fracture simulations, simulations of fracture propagation, whether I'm doing it because I want to model what happens when Godzilla sits on a fictional building, or because I want to model how some materials in the Earth are subjected to stress and form a network of cracks,

James:

they're essentially the same problem from a technical point of view. Some of my work was even used in a collaboration with a company that blast-freezes strawberries; they wanted to study how the air flow in their blast coolers would work, and that's just another fluid simulation. So when you have these tools, especially if you ever get to a point where you're lucky enough to have an exceptional set of skills, where you're really good at one particular thing, you start looking at what those skills can be applied to outside the area where you developed them. With the geological work, I met a geologist who studies these crack formations. He knows about them; he knows what's relevant and what's important about them.

James:

The geologist, Paul Gillespie, knows the context of these crack formations and what's relevant to the problems of flow through them; that's his domain as a geologist. From my perspective, it's another fracture simulation. So through our collaboration we were able to look at a problem together: he's not an expert on simulating cracks, I am, and by putting those two sets of expertise together we were able to attack a problem that neither of us could have worked on separately. The same thing happened with the frozen strawberries. I never would have thought of the problem of how important airflow around strawberries is to blast-freezing them, but I met someone who was working in that area.

James:

They had a problem, and I was able to apply the skills I had to it. Collaboration in general is another important lesson learned. I mentioned how developing relationships with people is important; collaborations, I think, are another part of that.

Mischa:

So there’s a lot of serendipity then.

James:

There's definitely a lot of serendipity there. But also, I guess from my perspective, a lot of things that seem disconnected from someone else's perspective are, to me, very similar. For example, I've done work on taking laser scans and using them to reconstruct 3D meshes. Somebody might look at that work and say, wow, what does this have to do with physics simulation and fracture propagation? Well, to solve the fracture propagation problem, we had to learn about remeshing and working with meshes and building meshes.

James:

And it turns out that if you have a bunch of points from a laser scan, and you want to reconstruct a surface from them, a clean, smooth, watertight surface that you could, for example, send to a 3D printer or use in a simulation, all those meshing techniques become relevant to that problem. So a lot of problems that seem really separate or different from an outside perspective, from the perspective of what the context is, what problem they're solving, what their domain is, might really share the same underlying technical problem.
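For readers who want a feel for that reconstruction step, one common route today is Poisson surface reconstruction. A minimal sketch using the open-source Open3D library follows; it is an editor's example with a hypothetical input file and parameters, not the specific technique from O'Brien's paper.

```python
import open3d as o3d

# Load a laser-scan point cloud (hypothetical file) and estimate normals,
# which Poisson reconstruction needs to orient the surface.
pcd = o3d.io.read_point_cloud("scan.ply")
pcd.estimate_normals()

# Fit a smooth, watertight triangle mesh to the points. Higher depth
# gives finer detail at higher cost.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# The result can go to a 3D printer or into a simulation.
o3d.io.write_triangle_mesh("reconstructed.ply", mesh)
```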

Mischa:

Sure. Now, I know that particularly in computer science there can be a large gulf between what's being done in academia and what's happening in industry. What I'm wondering is, from where you're sitting, do you see anything exciting, anything that could really influence the way we live and change society, that's being researched right now but hasn't made it into the mainstream or to consumers so far?

James:

Yeah, that's interesting. I think the connection between academia and industry depends on the field, and even within computer science it depends on subfields or areas. Look at computer graphics, for example. Historically it went through a period early on where it was all in academia. This is before Tron, before Star Trek II: The Wrath of Khan; both were early users of graphics effects. Before that, graphics was just this cool thing people were doing in the lab, making pictures, and it was kind of esoteric. At some point, maybe in the late nineties, we started a transition to where we are now, where graphics is heavily used in industry.

James:

It's pretty much ubiquitous in film, for example, and it's all over video games; it's everywhere. And the way the area works has changed over time. Today I think the separation between academia and industry in graphics is very small. A lot of the great work that gets published is actually done in collaboration with industry. For example, one of my students, Stephen Bailey, has a paper on animating faces using machine learning that's going to be in SIGGRAPH this summer. That project was done in collaboration with researchers at DreamWorks; it was actually built using their proprietary animation system, and it was built to help them on their next film, basically.

James:

So there's no separation; it's actually right in there. Paul DiLorenzo is the person over at DreamWorks who was collaborating with us, and Dalton Omens is the other student of mine who was working on the project. Paul is probably right now working on getting that integrated into the DreamWorks pipeline, so there the separation is basically zero. You have other fields, such as machine learning, where not only is there very little separation between academia and industry, I would say they've kind of become the same. If you look at the top places, go look at OpenAI and ask who's working there: it's a bunch of faculty who have taken leave, or who split their time between the university and these various industrial labs.

James:

So there can be a lot of connection there. I don't want to pick on any particular field, but there are other fields; I'm going to go ahead and mention programming languages. It's actually a great and important field, so I don't want to sound like I'm picking on it. But think about most people who are writing code: they use whatever language is available, right? They might have a preference about which language they use, but ultimately they're going to use whatever is available, whichever one has the right library support, whichever one is being used at the company they work at. They're going to do what they can.

James:

So it seems like there's a big divide between academic programming languages work and what people actually do in practice. But if you look at C++ and the newer features that have come into the latest versions of the standard, and then go back several years, you see those ideas popping up in academia, and then they slowly migrate and find their way into practice. In the case of a programming language, that's necessarily going to be slow, because it takes time to take an idea from a very theoretical proposal about some new language feature to the point where it shows up in an actual language that's widely used in a lot of contexts, like C++ or Python.

James:

Because you have to figure out what the feature is, what the new idea is; then you have to figure out how to integrate it into an existing language or build a new language. Once that's done, you have to push it out there and actually make a commercial, industrial-strength compiler that implements the feature in a way people can depend on. And then you've got to convince people to actually start using it. So there's going to be a long delay, and I don't think that's a fault of the people working in academia, as if they were looking at too esoteric a problem or didn't care about getting it out to the world; that's just a process that has to play out. Whereas in other fields, you can come up with an idea and immediately hand it to someone. If I have a new way of doing a cool lighting effect in film, I can hand the code off to somebody at ILM or Pixar and they could use it tomorrow. So different fields are going to have different speeds at which stuff transitions.

Mischa:

Gotcha. So tell me, why are humans so bad at shadows? Why are our intuitions so wrong when it comes to evaluating shadows?

James:

That's a great question. I think our intuition isn't just wrong about shadows. When we look at images in general, there are a lot of things that can be wrong with an image that our visual system just doesn't pick up on. A lot of the tricks people use when they edit images, taking things out of the background or whatever, rely on the fact that while we're very good at understanding the content of an image, we're not very good at finding problems with it. And this observation is the reverse of what a lot of computer vision researchers would say.

James:

Computer vision researchers are trying to write software that will understand the content of an image, and most of them would very happily tell you how amazing our visual system is: we can look at a handful of pixels and say, that's my friend Bob, and oh, he's got a new set of glasses or a new haircut. How did you figure that out from 16 by 16 pixels? Our visual system is really good at those sorts of things. And I think one of the reasons we're good at understanding content, but not good at finding the problems, is that there was really no reason in nature for us to evolve that ability. If you think about us living out in the real world, all the input is valid.

James:

It's like having a function in your code where the caller is only going to pass legit arguments. Why should you have error checking in that function? It's just going to slow it down when you know for a fact it's always going to be called with legit input. Whereas if you're being called by something untrusted, something you don't know, you would check the input to make sure it's valid before you process it. There was no reason for our visual system to do that checking: when we go out in the world, everything we see is a valid input. So we're good at understanding what's there; we're good at spotting little things, like a tiger hiding in the bushes; we're good at understanding what we're looking at. But there's no real reason why we should be good at detecting fakeness, because it just doesn't happen in the real world.

James:

So why would we evolve that capability? I like to think of it this way: whatever the input to our visual system is, we project it into the space of valid things, and then we interpret it. Maybe this is too nerdy an example, but it's like writing code that expects an unsigned integer to be passed in, and now some bonehead passes in a signed integer with a negative value. Well, your code is going to interpret it as a positive unsigned integer, and that's going to cause some weirdness down the line in what it produces. I think our visual system in a lot of ways is like that: it wasn't designed to take these weird images we're able to produce with Photoshop or RenderMan or whatever your software is.
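James's analogy, made concrete (an editor's illustration): reinterpreting the bits of a negative signed byte as an unsigned byte yields a confidently wrong but "valid" value, much as the visual system projects a doctored image into the space of plausible scenes.

```python
import struct

# Pack -5 as a signed byte, then read the same bits back as unsigned.
(as_unsigned,) = struct.unpack("B", struct.pack("b", -5))

# The invalid input has been silently "projected" into the space of
# valid values, and everything downstream will trust it.
print(as_unsigned)  # 251
```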

James:

It wasn't designed for that, and so the result is there are a lot of things that can fool us into thinking things are real that aren't; that's just the way our visual system works. In some ways it's cool, because it allows for all sorts of artistic illusions. A lot of great art is interesting to look at precisely because it plays with quirks in how our visual system interprets images. But it can also be used by people who want to trick us into seeing something that's not there.

Mischa:

So I could turn the shadow quality down on my video game and probably be okay.

James:

In some ways, yes. If the shadows go in the wrong direction, you won't notice. If the shadows are doing something weird as they move across surfaces, then some things you'll notice and some things you won't, and it's not always clear; the things you notice aren't always the things that are harder or easier to do.

James:

It really depends on what you're looking at. Think about an image. Right now we're looking at video feeds; that's how we're all talking to each other, because we're in different parts of the world. When the images are lower resolution, they can start to look wonky, pixelated. But we know that if you take a pixelated image and send it through a filter that takes out the pixelation and smooths it out, there's no extra information there, but it's something that looks better to us. The way we perceive artifacts in images, and in general the way we look at the world, isn't really a mathematical analysis. You can have an image and say, this is a low-res image,

James:

and the information in this image is exactly the same as in this other image, but the fact that one looks blocky and pixelated while the other looks smooth is a huge difference to us, even though technically they're really the same image.
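A tiny demonstration of that point (an editor's example): two renderings of the same four samples, one blocky and one smoothed. The filter adds no information; only the presentation changes.

```python
import numpy as np

low = np.array([0.0, 1.0, 0.2, 0.8])        # a tiny 1D "image": 4 samples

blocky = np.repeat(low, 8)                   # nearest-neighbor: pixelated
smooth = np.interp(np.linspace(0, 3, 32),    # linear filter: looks nicer
                   np.arange(4), low)

# Both arrays are derived from exactly the same 4 numbers; the smooth one
# simply presents them in a way our visual system finds less objectionable.
print(blocky.round(2))
print(smooth.round(2))
```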

Mischa:

Now let's talk about Berkeley real quick. You've been in Berkeley a few decades; I grew up there. There used to be, well, there still is, this main drag near campus called Telegraph Avenue, and I remember when I was a kid there were a lot of gutter punks there. It was hippies, various lively shops, characters; campus free speech was still a big thing back then. Now I feel like it's been more cleaned up and commercialized, and some of the soul that was there, that maybe grimy but still soulful and interesting environment, has kind of dried up a little. Do you have any perspective on that, or any experiences with it?

James:

I've been here for 20 years; that's when I moved to Berkeley, to the San Francisco Bay Area. And I think you're right; you've noticed something that's definitely true, which is that as time has gone on, the San Francisco Bay Area has evolved into more of a tech-dominated monoculture. Technology occupies a huge place in the culture here, to the point where if you meet a totally random new person on the street, someone you've never met before, and you decide, hey, this person's cool, let's go for coffee and talk,

James:

there's a good chance you're going to be talking about something related to tech. There's actually a good chance they work at Facebook or some other tech company. In some ways that's great, because it's nice to be able to share that common experience of programming and technology with a lot of people around you. It's also maybe unfortunate, because as you observed, it kind of pushes out other things. And when we talk about housing, San Francisco housing being very expensive, a lot of those problems stem from the way the area evolved: lots of people crowding into a very small area focused on one particular industry.

James:

Some of the negatives of that are that if you want to do something else here, it may not be very easy. And if you're trying to build a tech startup, or trying to hire people in tech, and you're not in Silicon Valley or one of a few other hotspots, it's hard to find good people. I was dealing with a company located in Connecticut a little while ago, and one of their biggest problems was finding people with the right level of skill who would be willing to move to Connecticut to work there. So this is a real difficulty. It does create an opportunity: if you're a programmer with good skills and you're willing to go to Connecticut, for example, you'd have some great opportunities there because of your willingness to relocate.

James:

I think that with this whole COVID pandemic situation, a lot of work has moved online. Some companies really seem to be embracing it; others are already saying this is bad and we want to go back to the way it was. A lot of workers love it, and some people are complaining that they don't like the isolation. So I think it's going to be very interesting to see how this evolves over the next few years, when hopefully the pandemic situation will be resolved quickly and we all have the opportunity to go back to normal life. One of the questions will be: are there things we learned during this pandemic that we want to keep when we go back? Maybe more remote working is beneficial and we keep it, and that would help in a lot of ways. Or maybe we'll decide we didn't like it, set it aside, and go back to the way things were. Who knows.

John:

It's interesting. We did an internal poll at JetBridge, and 60% of our developers said they want to work outside the office permanently. Personally, I miss our offices; I hope we go back to being together physically. Adam is our CTO, and Adam, you have our three last questions.

Adam:

Hey. So I read your paper about multilayer displays, and it got me thinking a little bit about what augmented-reality displays will look like, and when there will be commodity heads-up displays that you could use while, I don't know, driving a car or riding a motorcycle.

James:

Do you mean a heads-up display that's integrated into the car, or maybe a comfortable pair of AR glasses you could wear even while you were driving?

Adam:

More like AR glasses, something small and portable, like the heads-up displays already in cars, but something that gives you the depth perception that a multilayer display gives you.

James:

There's this issue of having a three-dimensional world that you see. We're all familiar with static images, or even moving images, that are flat on our screens. When we put on a VR headset or an AR headset, we can now see in stereo: we see right-eye and left-eye images, and that gives us 3D perception. I think what many people aren't familiar with is that in addition to having two eyes to tell us about depth, because of the stereo difference, our eyes also focus, just like a camera. We're all familiar with this: if you need glasses, like I'm wearing right now, or if you're looking at someone close, the background is going to be blurry. Or if you take a photograph, we know about depth of field and things being blurry.

James:

It turns out that our eyes use focus information as one of the ways they figure out how far away things are. There's actually a feedback loop between the mechanism that focuses our eyes and the mechanism that causes our eyes to converge, which is when the two eyes point at the same thing. You've got two eyes, they both look at something, and those mechanisms work together. The result is that when we're out in the real world and we look at something, our eyes can very quickly lock onto that object, and we have a good feeling for how far away it is because the stereo and the focal cues work together. Now, when we put on a VR headset, that's not the case.

James:

We have two eyes, but everything is being presented at one depth: the optical depth of the screens for the right and left eye. And if you display things at different apparent depths, it creates fatigue, because your eye tries to do what it normally does in the real world. It will see something in stereo on the virtual screen and say, oh, this is 10 feet away, but the virtual screen in an HMD is always, say, four feet away. So it will try to focus at 10 feet, then come back to four, and keep doing that over and over again. If you remember, in the old days when 3D movies first came out, people would talk about fatigue and headaches; that was because people didn't know about this back then.

James:

They would put things at all kinds of different depths, while the screen was always the same distance away: 30 feet, or whatever the movie screen was, or whatever distance your TV is. And the fatigue came from this focal fighting that our visual system would be doing. Today they know about this problem, so whatever the director or the person putting the film together thinks you're going to be focusing on will always be put at the depth that equals zero disparity, in other words, where the two agree. If you're looking at the main character talking, the main character is placed at zero disparity so that their distance matches the distance of the screen.

James:

And since you're focused on them, you're not going to get those headaches and that fatigue. In a movie, the key content they expect you to focus on is always put at zero disparity, where the screen distance and the focus distance agree. But maybe you're watching the movie for the third time and looking at other parts of the frame, maybe you're obsessed with a certain actor, so you only pay attention to that person rather than what they thought you'd watch. Then those fatigue effects would probably creep back in, because now you're looking at areas that are not at zero disparity and focusing on them a lot. Inside an HMD, you're looking all around the world, and the virtual display is always at the same depth.
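To put rough numbers on the conflict James is describing (an editor's sketch; the screen distance and interpupillary spacing are assumed, typical values): the content sets the vergence demand, the screen fixes the focus demand, and the two agree only at the screen's own depth.

```python
import numpy as np

ipd = 0.063     # interpupillary distance in meters (typical adult value)
screen = 1.2    # assumed optical distance of the HMD screen, in meters

for depth in [0.5, 1.2, 3.0, 10.0]:          # virtual object depths
    vergence_deg = 2 * np.degrees(np.arctan(ipd / 2 / depth))
    vergence_demand = 1 / depth              # diopters, set by the content
    focus_demand = 1 / screen                # diopters, fixed by the screen
    conflict = vergence_demand - focus_demand
    print(f"object at {depth:>4} m: eyes converge {vergence_deg:4.2f} deg, "
          f"vergence-accommodation conflict {conflict:+.2f} D")
```

Only the object at 1.2 m, the screen's own depth, shows zero conflict; everything nearer or farther forces the focus/vergence fighting described above.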

James:

If you really want to create the true feeling that you're in a 3D world, not just a fake, artificial 3D world, you need to get a lot of other things right too, but one of them is that you need to get the focus cues right. It's a little hard to appreciate, because most people can't experience it. But in a lab on campus at Berkeley, actually an optometry lab run by Marty Banks, another professor I've collaborated with, he's built a display device that can display at multiple focal depths. And the experience of using that device is as big a difference, in terms of how realistic things look and how real it feels, as going from a single flat movie image to stereo movie images.

James:

It's that big a difference in how it feels when you look at it. One of the things that's actually cool is that people are already working on building displays that have this; the Magic Leap actually has two focal planes. When we talk about the idea of having different focal depths, I think that's going to sound unfamiliar to most people, and someone might ask why it's important. Think about the difference between going from a regular movie to a 3D movie, or between looking at a regular computer monitor and putting on a stereo head-mounted display, that extra feeling you get of suddenly seeing the depth. Imagine that big a difference, gained again with the focus cues.

James:

It's hard to describe until you try it and suddenly realize, wow, that's what it should look like. And there are displays out in the real world that at least try to go in this direction. The Magic Leap headset actually has two focal planes, and I believe their original plan was to have more than two, for exactly this reason: it creates this feeling of immersion and reality in the depth of what you're looking at. But it's difficult to build a display device with multiple focal planes, at least currently. I think it's a question of time before someone figures out a way to actually do it, make it work, and make it small enough to go into a headset, or better yet into something the size of a pair of glasses that you might wear comfortably every day. I think that's just a question of time, and it's going to push the level of realism we get from our displays another notch beyond what we currently call 3D or stereoscopic views.

Adam:

I'm a gamer, and I'm a little bit unhappy that the quality of animation in movies is much, much better than in computer games, since games have to run in real time; it's really visible that in the movies things are more real. When do you think that gap will get smaller, and can edge GPUs help close it sooner?

James:

That's a good question. Right now there are two main things that cause the difference between real-time graphics and offline graphics, between games and movies, basically. The real-time stuff obviously has to run in real time: whatever computing power is available to you, that's the limit. Whereas in offline graphics you can use as much compute as you want and take as long as you want. The other thing that causes a difference in quality is that the real-time stuff gets generated and you're done: it gets rendered, a sixtieth of a second later it's on your screen, and then the next picture comes. With a film, each frame can be looked at by an animator. You can always have an artist or animator come back and fix things up afterwards, right? If you don't like how something came out, you can redo it; if there's a small problem, you can touch it up. That aspect of touch-up and fixing probably won't ever be part of the real-time experience, for the obvious reason that you can't shove an animator inside your Xbox and keep them trapped in there. But the compute aspects, I think, are going to equalize. Right now we're getting to the point where there's a lot of compute power even inside laptops, inside relatively small devices.

James:

And that just keeps growing. The limiting factor in a lot of cases tends to be power: battery consumption and heat tend to be the limits, and for something you're wearing on your head, weight too. You can build the device, but can you actually make it small and light enough to wear on your head? So what we're already seeing is a lot of techniques for taking the compute and moving it out of the device you're wearing, or the device in front of you, and into the cloud, where you can bring a lot more computing resources to bear. So that, for example, for the one hour you're playing a game, you might be using three different machines to create your experience.

James:

And when you're done, those go back into the cloud and somebody else uses them. One of the companies I'm advising, Juice Technologies, is looking at a way of doing this specifically, by taking the need to have a GPU in your machine and moving that GPU out into the cloud. For example, I've got a little MacBook Air; the integrated GPU in that machine is pretty minimal. It's really not great for playing games, but the rest of the machine is fine: it's got memory, it's got a good CPU, there's nothing wrong with it. It just doesn't have the GPU for my games. So the idea is to take the GPU workload and move it out into the cloud in a way that's seamless, completely transparent to the user: let them use their laptop or whatever device is in front of them, use all the resources available on that machine,

James:

and then pull in this extra resource, GPU compute, maybe a $4,000 NVIDIA card sitting out in the cloud somewhere that draws who knows how many watts of power and puts out a lot of heat, something you could never put into a laptop today, but I can still access it and have my experience behave as if it were inside my laptop. That's a pretty exciting prospect. And I think techniques like that are going to start to close the gap between what can be done computationally in real time and what can be done offline. You'll still have the touch-up aspect, the human touch; that will remain a difference. But the experience you're going to see in online games is just going to keep getting better.
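A toy sketch of that transparency idea (an editor's illustration, not Juice's actual design): the caller writes an ordinary local call, and a thin shim routes the heavy work elsewhere; here a worker thread stands in for a cloud GPU node.

```python
from concurrent.futures import ThreadPoolExecutor

_pool = ThreadPoolExecutor(max_workers=1)  # stand-in for a cloud GPU node

def on_remote_gpu(fn):
    """Route calls through the 'remote' worker, invisibly to the caller."""
    def wrapper(*args, **kwargs):
        # A real system would serialize the call, ship it to a remote GPU
        # server, and stream the result back; the caller never knows.
        return _pool.submit(fn, *args, **kwargs).result()
    return wrapper

@on_remote_gpu
def heavy_kernel(n: int) -> int:
    return sum(i * i for i in range(n))   # stand-in for a GPU workload

print(heavy_kernel(1_000_000))            # looks like an ordinary local call
```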

James:

And as display technology gets better, we're eventually going to end up, hopefully, with situations where the feelings of presence and immediacy are very strong. Maybe that's the solution to what we were talking about with remote work: maybe we'll get to where the remote experience becomes almost as good as the in-person experience.

Adam:

Looking forward to that real-life experience with friends. I have one last question. What is the one truth about the technology industry, or about teaching technology in academia, that most people don't know or would disagree with you on?

John:

This is our Peter Thiel question, shout out to the great Peter Thiel.

James:

Well, there are a lot of secrets. From a technical point of view, for example, I think that as compute gets cheaper, explicit or partially implicit integration methods are going to outperform implicit ones, and I think most people would find that a surprising statement. But it's also probably not, globally, very interesting. The thing I spoke about earlier, being willing to hear something that says you're wrong, or more generally hearing something you don't like to hear, being able to listen with an open mind and integrate it into your world view, accepting it if you're wrong; and if you've listened but still think you're correct, being able to articulate clearly, without getting upset or starting a fight, why you disagree with the new information or why you think your view is still valid; and being able to learn from the times when you make mistakes.
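For readers unfamiliar with the distinction in James's first remark above, here is a minimal sketch (an editor's example, with hypothetical parameters) of one step of explicit versus implicit Euler on a simple spring. Explicit methods are cheap per step but need small steps for stiff systems; implicit methods stay stable at large steps but require solving a system each step, a cost trade-off that shifts as compute gets cheaper.

```python
import numpy as np

k, m, dt = 100.0, 1.0, 0.01    # spring stiffness, mass, time step
x, v = 1.0, 0.0                # initial position and velocity

# Explicit Euler: evaluate forces at the current state. Cheap, but
# unstable if dt is too large relative to the spring's stiffness.
x_explicit = x + dt * v
v_explicit = v + dt * (-k * x / m)

# Implicit Euler: evaluate forces at the end of the step. For this
# linear spring that means solving a small linear system each step.
A = np.array([[1.0, -dt],
              [dt * k / m, 1.0]])
x_implicit, v_implicit = np.linalg.solve(A, np.array([x, v]))

print(f"explicit: x={x_explicit:.4f}, v={v_explicit:.4f}")
print(f"implicit: x={x_implicit:.4f}, v={v_implicit:.4f}")
```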

James:

I think those are all really important, but unfortunately a lot of people have trouble hearing things they don't like. So maybe it's not a secret; I think most people at some level know it. But it's a secret in the sense that it's hard to put into practice, and I'm not claiming to be perfect at it, of course. Both in terms of technology and in terms of life in general, being able to hear things that you don't like, understand what's being said, and think about it rationally: technically, I think that's how you find solutions you might not find otherwise. And as far as living your life goes, I think it's a way to avoid conflict. You can avoid conflict without being a pushover, without having no say in the world; you just don't have to have conflict for the sake of conflict.

John:

James, thank you so much for spending an hour with us. I learned a lot just by listening to the conversation. We really appreciate it. Thank you, and I hope we can do this again.

James:

Well, thank you very much. It was great being here, good talking to all three of you. I hope we can do it again sometime.

John:

And I have to shamelessly plug: we have a YouTube channel. Our audio-video tech is going to place it right here. Thank you, guys.