SiliconANGLE theCUBE
Dr. Stuart Madnick, MIT | MIT CDOIQ 2019
Video Duration: 16:33

>> From Cambridge, Massachusetts, it's theCUBE, covering the MIT Chief Data Officer and Information Quality Symposium 2019. Brought to you by SiliconANGLE Media.
>> Welcome back to MIT in Cambridge, Massachusetts, everybody. You're watching theCUBE, the leader in live tech coverage. This is MIT CDOIQ, the Chief Data Officer and Information Quality Conference. I'm Dave Vellante with my co-host, Paul Gillin. Professor Dr. Stuart Madnick is here, a longtime CUBE alum and longtime professor at MIT, soon to be retired. (laughing) We're really grateful that you're taking your time to come on theCUBE. It's great to see you again.
>> It's great to see you again. It's been a long time since we worked together, and I really appreciate the opportunity to share my team's experience here with your audience.
>> It's really been fun to watch this conference evolve. We're full; it's really amazing. We have to move to a new venue next year, I understand. (laughing) We talk about the data explosion all the time, but one of the areas that you're focused on, and that you're going to talk about today, is ethics and privacy. Data causes so many concerns in those two areas. Give us a highlight of what you're going to discuss with the audience today, and we'll get into it.
>> One of the things that makes it so challenging is that data has so many implications, and that's why the issue of ethics is so hard to get people to reach agreement on. We were talking to some people regarding medicine and the idea of big data and AI. In order to really identify causes, you need massive amounts of data. But that means more data has to be made available, as long as it's everybody else's data and not mine, not in my backyard, if you will. So we have this issue where, on the one hand, people are concerned about sharing their data; on the other hand, there are so many valuable things we gain by sharing data. Getting people to reach agreement is a challenge.
>> One of the things I wanted to explore with you is how things have changed. You were very familiar back in the day, and Paul, you as well, with Microsoft and the Department of Justice and FTC issues, and it wasn't so much around data; it was really around browsers and (mumbles). But today you see Facebook, Google and Amazon coming under fire, and it's largely data-related. Liz Warren last night, again: break up big tech. Your thoughts on similarities and differences between the monopolies of yesterday and the data monopolies of today? Should they be broken up? What are your thoughts on that?
>> Let me broaden that issue a little bit, if you will. I don't know the demographics of your audience, but I often refer to the characteristics of millennials. I ask my students this question: "How many of you have a Facebook account?" Almost everybody in the class has a Facebook account. I say, "You realize you've given away a lot of information about yourself." It doesn't really occur to them that that may be an issue. I was told by someone that in some countries Facebook is very popular, and that's how they coordinate kidnappings of teenagers from rich families. They track them; they know they're going to go to this basketball game or that soccer match. They know exactly where they're going afterward, and that's a terrific spot to kidnap them. So I don't know whether students think about the fact that when they're putting things on Facebook, they're putting so much of their life at risk.
On the other hand, it makes their lives richer and more enjoyable. That's why these things are so challenging. Getting back to the issue of breaking up the big tech companies: one of the big challenges there is that in order to do the great things that big data has been doing, and the things that AI is promising to do, you need lots of data. Having organizations that can gather it all together in a relatively systematic and consistent manner is so valuable. There are some reasons why people would want to break up the tech companies, but doing so also interferes with that benefit, and that's why I think it's got to be looked at really carefully, to see not only what may be gained by breaking them up but also what losses or disadvantages we're creating for ourselves. One example might be if it makes the United States less competitive vis-a-vis China in the area of machine intelligence.
>> The flip side of that is that Facebook has every incentive to appropriate our data to sell ads, so it's not an easy equation.
>> Even ads are a funny situation. For some people, having a product called to your attention that's something you really want, but you never knew about before, could be considered a feature. The ads could be viewed as a feature by some people and a bit of an intrusion by others.
>> Sometimes, when we used to search Google, right, we were all looking for the ads on the side. No longer; it's all ads today.
>> I wonder if you see public sentiment changing in this respect. There's a lot of concern, certainly at the legislative level now, about misuse of data. But Facebook usership is not going down; Instagram membership is not going down. The indication is that ordinary citizens don't really care.
>> I don't have all the data, maybe you may have seen some, but just anecdotally, in talking to people and in the work we're doing, I agree with you. It may be a bit dramatic, but I was at a conference once and someone made the comment that there has not been a digital Pearl Harbor yet. There's not been some event so onerous, so awe-inspiring, that people remember the day it happened. So these things happen, there may be a little bit of press coverage, and you're back on your Facebook account or Instagram account the next day. There's nothing that's really dramatic. Individuals may change now and then, but I don't see massive changes.
>> But you had the Equifax hack two years ago, 145 million records. Capital One just this week, 100 million records. I mean, that seems pretty Pearl Harbor-ish to me.
>> It's funny. We were talking about that earlier today regarding different parts of the world. I think in Europe, in general, they really seem to care about privacy. In the United States, they kind of care about privacy. In China, they know they have no privacy. But even in the U.S., where they care about privacy, exactly how much they care about it is really an issue, and in general it's not enough to move the needle; if it does, it moves it a little bit. How about the time when they showed that smart TVs can be broken into? Smart TV sales did not budge an inch. How many people even remember that big scandal a year ago?
>> To your point about Equifax, just this week I think Equifax came out with a website where you can check whether or not your credentials were--
>> It's a new product. (laughs)
>> --compromised. And mine had been.
>> As had mine, as had my wife's, as had Stu's. So you had a choice: free monitoring or $125.
>> And then we went, "Okay, now what?" Life goes on. It doesn't seem like anything really changes. We were talking earlier about your 1972 book about cybersecurity, and how many of the principles that you outlined in that book are still valid today. Why are we not making more progress against cybercriminals?
>> Two things. One thing is, you've got to realize, as I said before, the caveman had no privacy problems and no break-in problems, but I'm not sure any of us want to go back to the caveman era. You've got to realize that for all these bad things, there are so many good things happening, things you can now do with your smartphone that you couldn't even visualize doing a decade or two ago. There's so much excitement, so much forward momentum, autonomous cars and so on, that these minor bumps in the road are easy to ignore in the enthusiasm and the excitement.
>> As we head into 2020 and the election: it was fake news in 2016, now we've got deepfakes, we've got the ability to really use video in new ways. Do you see a way out of that problem? A lot of people are looking at blockchain. You wrote an article recently on blockchain: "You think it's unhackable? Well, think again." What are you seeing in--
>> I think one of the things we always talk about, when we talk about improving privacy and security in organizations, the first thing is awareness. Most people are aware for only a small moment of time that there is an issue, and it quickly passes out of mind. The analogy I use is industrial safety. You go into almost any factory and you'll see a sign over the door that says, "520 days since last industrial accident," and then a sub-line, "Please do not be the one to reset it to zero."
>> Yeah, yeah.
>> And I often say, when's the last time you went to a data center and saw a sign that said, "50 milliseconds since last cyberattack"? (laughing) Or data breach, and so on. It needs to be something that is really front of mind for people, and we talk about how to run awareness activities both in companies and in households. That's one of our major efforts here: we try to make you more aware, because if you're not aware that you're putting things at risk, you're not going to do anything about it.
>> Last year at SiliconANGLE, we contacted 22 leading security experts and asked them a simple question: are we winning or losing the war against cybercriminals? Unanimously, they said we're losing. What is your opinion on that question?
>> I have a great quote I like to use: the good news is the good guys are getting better, better firewalls, better cryptographic codes, but the bad guys are getting badder faster. There are a lot of reasons for that, and I won't dwell on all of them, but we came out with an article talking about the dark web, and the reason it's fascinating is this. If you go to most companies that have suffered a data breach or a cyberattack, they'll be very reluctant to say much about it unless they're really compelled to do so. On the dark web, they love to brag; it's brand and reputation: "I'm the one who broke into Capital One." There's much more information sharing; they're much more organized, much more disciplined.
>> I mean, the criminal ecosystem is so superior to the chaotic mess we have here on the good guys' side of the table. Do you see any hope for that? There are services, IBM has one and there are others, that anonymize security data and enable organizations to share sensitive information without risk to their company. Do you see any hope on the collaboration front?
>> As I said before, the good guys are getting better. The trouble is, at first I thought the issue was that there wasn't enough sharing going on. It turns out we identified over 120 sharing organizations. That's the good news and the bad news. There are 120: IBM is one, and there are another 119 to go. So it's not very well-coordinated sharing that's going on. That's just one example of the challenges. Do I see any hope for the future? Only in the more distant future, because the challenge we have is that there'll be a cyberattack next week of some form or shape that we've never seen before, and therefore we're probably not well prepared for it. At some point I'll no longer be able to say that, but I think the cyberattackers and breachers and so on are so creative that they've got another decade or more to go before they run out of steam.
>> We've gone from hacktivists to organized crime and now nation-states, and you start thinking about the future of war. I was talking to Robert Gates about this, the former Defense Secretary, and my question was, "Don't we have the best cyber? Can't we go on the offense?" And he goes, "Yeah, but we also have the most to lose." Our critical infrastructure, and the value of that to our society, is much greater than some of our adversaries'. So we have to be very careful, and it's kind of mind-boggling to think about. Autonomous vehicles is another one. I know that you have some visibility on that. You were saying that the technical challenges of actually achieving quality autonomous vehicles are so daunting that security is getting pushed to the back burner.
>> The irony is, I was a visiting professor at the University of Nice about 12, 14 years ago. That was before autonomous vehicles were on the (mumbles), but they were doing what they call automotive telematics, and I realized at that time that security wasn't really their top priority. I happened to visit an organization doing real autonomous vehicles now, 14 years later, and the conversation was almost identical. Now, the problems they're trying to solve are harder problems than they had 14 years ago, much more challenging problems, and as a result those problems dominate their mindset, and security issues, well, we'll get around to 'em. If we can't get the car to drive correctly, why worry about security?
>> What about the ethics of autonomous vehicles? (laughing) We talked about that, yeah. You're programming it: if you're going to hit a baby, or a woman, or kill your passengers and yourself, what do you tell the machine to do? That seems like an unsolvable problem.
>> I'm an engineer by training, and possibly many people in your audience are too. I'm the kind of person who likes nice, clear, clean answers. Two plus two is four, not 3.9, not 4.1; that school up the street may deal with that. (laughing) The trouble with ethics issues is they don't tend to have a nice, clean answer. In almost every study we've done that poses these kinds of issues, where we had people vote, the votes spread across the board, because every one of the choices is a bad decision. So which of the bad decisions is least bad?
>> What's an example that you would use in your classrooms?
>> The example I use in my class, and we've been using it for well over a year now in a class I teach on ethics, is this: you are the designer of an autonomous vehicle, so you must program it to do everything. The particular case is, you're in the vehicle, it's driving around the mountainous Swiss Alps, you go around a corner, and the vehicle, using all of its sensors, realizes that straight ahead in the right-hand lane is a woman pushing a baby carriage. To the left, just entering the crossway, are three gentlemen. Both sides of the road have concrete barriers. So you can stay on your path and hit the woman and the baby carriage, veer to the left and hit the three men, or take a sharp right or a sharp left, hit the concrete wall, and kill yourself. The trouble is, every one of those is unappealing. Imagine the headline: "Kills Woman And Baby." That's not a very good thing. There actually is a theory of ethics called utility theory that says, "Better to save three people than two or one." So therefore you do not want to kill the three men; that's the worst. And then the idea of hitting the concrete wall may feel magnanimous: "Well, I'm just killing myself." But then, as the designer of the car, shouldn't your number one duty be to protect the owner of the car? So what people basically do is close their eyes and flip a coin, because they don't want any one of those answers.
>> It's not an algorithmic response.
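The tension in this exchange can be made concrete with a few lines of code. The sketch below is a hypothetical illustration, not anything from a real vehicle: the Option class, the death counts, and the owner-first rule are all assumptions made up for the example. It scores the three choices from the Swiss Alps scenario, first under pure utility theory (fewest expected deaths wins), then under a rule that never sacrifices the occupant.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    deaths: int          # expected fatalities for this choice (hypothetical)
    occupant_dies: bool  # does this choice kill the car's own passenger?

# The three options from the Swiss Alps example; numbers are illustrative.
options = [
    Option("stay in lane: hit the woman and baby carriage", 2, False),
    Option("veer left: hit the three men", 3, False),
    Option("swerve into the concrete barrier", 1, True),
]

# Rule 1, utility theory: minimize total expected deaths.
utilitarian = min(options, key=lambda o: o.deaths)
print("utility theory picks: ", utilitarian.name)   # the barrier (1 death)

# Rule 2, owner-first: never sacrifice the occupant, then minimize deaths.
loyal_options = [o for o in options if not o.occupant_dies]
owner_first = min(loyal_options, key=lambda o: o.deaths)
print("owner-first rule picks:", owner_first.name)  # the carriage (2 deaths)
```

Utility theory alone drives into the barrier; the owner-first rule hits the woman and the carriage. Two defensible rules, two different victims, which is why the class votes spread across the board.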
>> I want to come back, before we close here, to the subject of this conference.
>> Exactly.
>> You've been involved with this conference since the very beginning. How have you seen the conversation change since that time?
>> I think it's changed in two ways. First, as you know, this is a record-breaking group of people we're expecting here; close to 500, I think, have registered. So it's clearly grown over the years. But also, whether it was called big data then or AI now, whatever, it is something that was not quite on the radar when we started; I think it was over 15 years ago that we first started the conference series. So clearly it's become not just something we talk about in the academic world, but mainstay business for corporations, more and more. And I think it's just going to keep increasing. So much of our society, so much of business, is so dependent on data in whatever way, shape or form we use it and have it.
>> It's come full circle, as Paul and I were saying at our open. This conference emerged from the ashes of the back office and information quality, and then, like you say, big data and now AI. And guess what? It's all coming back to information quality.
>> Exactly. Lots of data that's no good, or that you don't understand what to do with, is not very helpful.
>> Dr. Madnick, thank you so much for coming to theCUBE.
>> It's a pleasure. Your partnership for all these years, really, I want to thank you for that. And I want to thank you guys for joining us and helping to spread the word.
>> It's been our pleasure. All right, keep it right there, everybody. Paul and I will be back at MIT CDOIQ right after this short break. You're watching theCUBE.