Developing Autonomous and Intelligent Technologies for Good with John C. Havens.


This week on the Tech Cat Show!

Speaker 1: Welcome to the Tech Cat Show with host Lori H. Schwartz. Each week we hear from established leaders in the technology and consumer industry. Finding out the scoop should never be this much fun. Now here is your host, Lori H. Schwartz.


Lori Schwartz: Hi everybody, and welcome back to the Tech Cat Show. It is a sunny day here in Los Angeles and we are getting into a topic that is really coming up every day in my world, and that is artificial intelligence, or at least a corner of the world of artificial intelligence. Everybody’s wondering, are robots going to take over and are all these scary science fiction movies really true? Are we going to be living in caves with machines taking over? That’s not exactly what we’re going to be talking about today, but we’re going to get to the heart of this idea of, how do we handle what’s happening with all of this technology that is autonomous and artificial and intelligent?


So we have the fabulous John C. Havens, who is the executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and John’s going to explain what all of that means. But just to give you a little background and some more fun initials, the IEEE is the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity, and so the benefit of humanity is going to be a theme in today’s conversation as we find out how we’re going to manage all of this.


Just a little bit more background on John. In the last few years John has written a book called Heartificial Intelligence: Embracing Our Humanity to Maximize Machines, so you can see where some of this conversation is going, and prior to that another book called Hacking Happiness: Why Your Personal Data Counts and How Tracking It Can Change the World. So hopefully we’ll get to hear a little bit more about those fabulous books, but ladies and gentlemen, let’s have a hand for John C. Havens.


John C. Havens: Hey.


Lori Schwartz: The crowd always goes wild.


John C. Havens: Great to meet you and-


Lori Schwartz: Nice … You too.


John C. Havens: Great to meet you, and thank you for calling me fabulous. I really appreciate it. One thing I should say up front, of course: everything I say today represents my opinions and doesn’t necessarily reflect those of IEEE. Just FYI, and great to be on the show.


Lori Schwartz: Wonderful disclaimer. Nothing I say represents anything other than the coffee that I just had. How’s that?


John C. Havens: Excellent. Well that same disclaimer.


Lori Schwartz: So John, give us a little sense of your background. I know you have a performing background, which I love, and you’ve done so many interesting things besides being an author and I know you’ve written articles for a number of well established magazines, both on the professional side of the world and stuff that consumers would read, but give us a sense of how you got to where you are today.


John C. Havens: Sure, and thank you so much. Definitely great for my ego, so I appreciate that. I did. I went to New York in 1991. I was an actor for about 15 years. I was in all the unions. I did Law & Order and some Broadway and TV and film, and then I fell into business. I did a lot of writing for different scripts, where I’d be on sets especially for industrial films, and then I got into business development with a tech firm, which led to being an EVP at a top-10 PR firm, where I really learned about business. Then on a personal level, my father passed away. He was a psychiatrist, and my mom is still a minister. So: writer, actor, and parents who were a psychiatrist and a minister. That all means introspection, which has now led to the ethics work at IEEE.


Lori Schwartz: Wow. That’s such a great combination of things, like a sandwich to make you who you are. Tell us about IEEE, just because, I mean I’ve heard of it but that’s because I’m always milling around in the technology space and I’ve had the privilege of interviewing other fine folks from that organization, but tell us a little bit about it so everyone understands.


John C. Havens: Sure. I am not objective. I’m a fanboy, so I’ll just say that upfront. When I was in PR I mainly knew about IEEE through academic journals. I worked with clients like HP and P&G when I was still in PR, and a lot of times we’d go to those journals for the core research, to get the really cutting-edge stuff. Interestingly, and you would know this of course, and this is not to categorize anyone in any pejorative sense, but when you hear that someone is cutting edge, that can be very exciting, and they are. But then what I discovered with IEEE is that these academics may have written a paper 10 years ago full of stuff that I thought I had just discovered on my own.


So I had the privilege of speaking at South by Southwest about four or five years ago, where there were a couple of people from the board of IEEE, and I was talking about what I explore in my last book, Heartificial Intelligence, and thank you for mentioning it: the real need for a code of ethics for AI. I went basically to pitch them as a consultant, because technically I’m a consultant with them, and said, “I think if anyone can create a real code of ethics for AI, or something like it that will be adhered to, it’s IEEE.” And just to add to what you said, the scope of IEEE is massive. It’s the world’s largest technology association, and it’s the heart of the engineering community, and I’m not just saying that.


My brother-in-law’s an engineer, and he told me when I first started working with IEEE, because IEEE was newer to me, “It’s just a given that anyone in any engineering college or post-grad program either joins IEEE or knows who they are.” So in that sense their influence is huge, especially with regards to any policy decisions that governments make that involve technology.


Now, as you just mentioned, AI is everywhere, in the EU, in the States, and IEEE has become that much more influential, so it’s been awesome to work with them. Especially as a geek, which I am.


Lori Schwartz: Why do we need some sort of council on AI? Why is it so important to look at this from an ethics perspective?


John C. Havens: Sure, and by the way, you were kind enough to mention that there’s the IEEE Global Initiative, and I’m executive director of that. Then there’s a newer program that the IEEE Standards Association, which is an operating unit within the larger organization, just launched, called CXI, or the Council on Extended Intelligence. That’s a little bit separate and I can give you some of the nuances of that.


Lori Schwartz: Yeah, yeah. Do. Do.


John C. Havens: But in terms of the initiative … Okay, cool. Alright. The initiative, meaning the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, started about three years ago, and at that time, and certainly at a personal level in my book, what I realized in interviewing a lot of people is that it’s not like anyone’s out there creating technology going, “Cool, let’s kill people.” Right? No one’s like, “Let’s purposely be unethical.” I mean, there are some, but the point is that’s not what most people creating technology are trying to do. They’re trying to create cool geeky stuff that can sell and give people value.


That said, what a lot of times people aren’t realizing about artificial intelligence, or what we call autonomous and intelligent systems, and I’ll say more about why the naming of this stuff is very important, is how these technologies now affect human agency, identity, and data. There are simply new aspects of these technologies that humans literally haven’t dealt with yet, at least not at this level and intensity. So we created the IEEE Global Initiative, which is the shorthand, to do two things.


One is there’s a paper we have called Ethically Aligned Design, and the second version is online for free. It’s a Creative Commons document. That’s a newer thing for IEEE, but I really salute them for doing it, in the sense that we purposely wanted to create the document in multiple versions, and we asked for feedback on both version one and version two, to say, “Look, it is too important to start to really prioritize how to think about ethical principles for these technologies. We have to put out a draft that isn’t the perfect final thing yet, but the reason the process is right is that we’re inviting conversation about it, and the Creative Commons aspect means that if it’s going to help you to use any portion of this document right away, do it.”


Then the second thing the initiative has done is provide inspiration, based on the work of Ethically Aligned Design, for people from the initiative to go to the IEEE Standards Association, because even though we’re a program of IEEE-SA, the Standards Association, it’s a separate area. Anybody can create or submit a project to the IEEE Standards Association. You or any of your listeners.


So the logic there is that our initiative was created as an innovation engine to think about, what standards do we need? Individuals from the group then went, and now we have 14 what are called approved standardization projects. For IEEE, at least, when you create a standard it takes anywhere from two to four years from when you start a working group until it’s actually released to the public, but these 14 standards working groups are all free. Anyone can join. You don’t have to be an IEEE member, and they’re focused on 14 different areas of ethical considerations around these autonomous and intelligent systems. So I’ll pause there, because I can tell you more, but I know that was a long bit.


Lori Schwartz: Well, we’re going to take a break in a couple minutes, but I think for the people in our world who are all working at various businesses, and who are all consumers as well, I think they’re trying to understand. There’s so much fear about this, so is a lot of this about taking care of some of those issues, the things we’re all afraid of, like being taken over by machines?


John C. Havens: Yeah, very much so, because when I wrote my book Heartificial Intelligence, and you kind of nailed it, I’ve never necessarily been afraid of killer robots. I mean, I’m a fan of Arnold Schwarzenegger and all that, and I watch everything. I watch Black Mirror. I watch Westworld. And I can totally claim it’s for work, because it kind of is. But the thing I explore in my book that I have a greater concern about is the sort of usurpation, or giving over, of not just the desire but almost the ability for introspection, on an individual level, and then certainly that would be reflected at a societal level.


Meaning, for an individual, the sort of decision you make about any technology that can do X for you. I always use the example of maps. Not too many people are like, “Paper maps! I miss paper maps and fighting with my spouse.” But there are aspects of paper maps that no one will ever know about again, like serendipity, right? I can think of any number of times before GPS, because I’m 49, when my wife and I would get lost, and you’d pull over at a restaurant and have one of the best meals of your life. That’s happening less with GPS. It doesn’t mean GPS is evil or bad. It’s just a thing. But literally GPS drives where you go, and more and more people just go wherever the voice in the car says. There have been reports of people taking a left and going off a bridge, because you’ve just given over full trust to this new thing.


Again, it’s not anti-technology, but the point is that that’s just one skillset. Now, with things like affective computing, which is the study of human emotion and its interaction with technology and the algorithms that drive it, there are more and more opportunities for us to give things over. Take, for instance, a device that you might have at home that can read stories to your child. There are more and more of these personal assistants at home.


I’m a parent. I sympathize with parents. Especially when the kids are young, you’re dying to get some extra sleep, but there are more and more reports now of these devices reading a story to your child, with music and sound effects and multiple actors, like an audiobook, and kids starting to prefer that the machine read them stories versus their parents. That means something may be lost if we’re not thinking about these things. And again, I’ll say this 17 more times: this is not about the technology. It’s not about demonizing or fearing the technology. It’s about retraining us. Again, my dad was a psychiatrist. My mom’s a minister, right? I lived in a world where introspection was a normal part of my life.


Most people don’t, and they don’t ask, “What are my values? How are they manifested in the technology that I use and bring into my home? And how do I ask key questions about some of these things so that, even where I don’t understand all the aspects of the technology, I can understand what’s right for me and my family before we bring it into our lives?”


Lori Schwartz: John, that is a great point that we can go out on this break with, and when we come back I want to find out a little bit more about some of the trends surrounding AI, so that we can then put on that layer you’re talking about, that ethics layer. So we’re going to be back in a moment with John C. Havens, and John of course is doing all sorts of fantastic work around ethics in the world of artificial intelligence as the executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. We’ll be back in a moment.


Speaker 2: When it comes to business, you’ll find the experts here. VoiceAmerica Business Network.


Speaker 1: The key point of contact between consumers and brands is technology. StoryTech, a boutique agency, empowers you to use that tech to deliver your message, engage your customers, and raise the bottom line. How do you track and exploit the trends? How do you stay ahead of industry disruption? And how do you maximize profit from content? From strategy to execution, the answer is StoryTech. Inform. Innovate. Create. Visit us at That’s


Follow us on Twitter at VoiceAmericaTRN. Get the lowdown on guests, new shows, and your favorites. That’s VoiceAmericaTRN.


Speaker 2: From the boardroom, to you. VoiceAmerica Business Network.


Speaker 1: This is the Tech Cat Show with Lori H. Schwartz. If you want to find out more about our show or to leave a comment or question, send an email to That’s


Lori Schwartz: Hi everybody, and we are back talking to John C. Havens, the executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and we were just talking about this ethics layer to be thinking about when it comes to artificial intelligence. And John, you were going to talk a little bit about defining AI and some of the trends around it, just how you see it in your world.


John C. Havens: Sure, and thank you. I think it can get very complex, and we talked about this on the break. There are a lot of different fields within AI, meaning there’s cognitive computing, deep learning, machine learning, et cetera, and I’d rather not go into all the nuances of those things, because sometimes academics disagree on what those things entail, which is fine.


I think for general listeners, or geeks like myself who aren’t in AI, one thing I often talk about is the difference between autonomous and intelligent technologies, and again this is a very broad, general definition. Anybody who drives a car with cruise control is used to the idea of autonomous technology: in the car you get to 65 miles an hour, you press a button, you take your foot off the accelerator, and it autonomously keeps driving without you holding your foot on the accelerator.


Intelligent technologies, again in general, are built around the idea that they can “learn,” and that means algorithms. These mathematical algorithms that you write with code are designed to, as it were, observe behavior, and that could be, again air quotes, “watching” a million cat videos and learning what the cat will do in the million-and-first video because of how the algorithms under the hood learned. So anyway, as a general distinction, I often talk about the difference between autonomous and intelligent systems.


Lori Schwartz: It’s funny, because I know a lot of friends who interchange all these different words depending on where in their life they’re dealing with AI, and the geekier ones will say, “Well no, that’s deep learning, or that’s this, or that’s that.” But really the concept here is that we’re using technology to come up with an algorithm that we can then apply to things, and that’s where the autonomous piece comes from, correct? Is that a good simplification of it?


John C. Havens: Yeah, it’s a good way to put it. I mean, again, a lot of times people say artificial intelligence and they picture robots, whereas, as you know with your expertise, robots are the physical manifestation of the coding, the software inside. So in that sense people tend not to realize that most AI is never really seen by the human eye, as it were, but yeah, I think the distinction you made was great.


Lori Schwartz: Yeah, the robots thing is just because of sci-fi, and sci-fi now seems to be coming true. I mean, I just had a whole conversation with a bunch of technology folks about how everything Star Trek talked about, especially holograms and transporters and virtual worlds, is actually happening now. So when you think about robots in science fiction and AI taking over, it just seems like a natural thing that, okay, we talked about that in dystopian futures, but everything else is coming true, so why wouldn’t that? You know? Why wouldn’t that, John?


John C. Havens: No, it is. Robots are going to kill us. Thanks for the interview. Not at all. Well, let me give you an example first, just to get to the ethics stuff, because first of all, when we talk about ethics, understandably sometimes people think we mean the word “morality,” because they can be synonymous depending on the circles. That is not the case here, so I want to be crystal clear that dictating morals is not the point, whether for me as John, because again I’m speaking for myself, or for IEEE.


There’s a design methodology called value sensitive design, which really means that you are actively asking more questions about the end user, and how they’ll use your product, from a cultural values or cultural ethics framework, than you might when you’re just kind of building. The concern sometimes is, well, there’s a bunch of people, maybe younger white guys, creating technology in Silicon Valley, and I’m not trying to demonize anybody. I’m just saying that’s true, right? Where you’re based and who you are influences how you design. But I’ll give you an example.


There’s a guy named Edson Prestes, and I love saying his name because he’s brilliant and I love him. He’s based in Brazil, and he was one of the first people who helped me think about value sensitive design with regards to robots and AI at large. He said, “Look, say you’re going to build a robot, a physical manifestation, and it’s got what looks like a human head, even if it’s not meant to look like an actual human, but it’s got a head and eyes. If, when you release it in the States, the eye-type things are designed to look the human who owns the robot in their eyes,” that makes sense, because in general, Westerners, at least Americans, look in each other’s eyes as a sign of respect.


But he said, “If you send that same robot, and you’re designing it for the market of, say, China,” and again, in general, in a lot of Asian cultures it is a sign of deference to not look in someone’s eyes. So right there I didn’t talk about utilitarianism or the tunnel and is the car going to kill the girl [inaudible 00:20:59] and all that stuff. This is simply, how are you building what you’re building, and have you taken the time, and this again is introspection in one sense, to fully understand the cultural values of whom you’re building for? It doesn’t mean you always defer to those values. If, for instance, someone’s value is overeating, you don’t build a sugary treat to have them eat more. But the point is that you have to know what those values are, and that’s a lot of what we’re doing with what we call ethically aligned design.


Lori Schwartz: You know, it’s so interesting. Just last night I was at a NASA event at JPL, the Jet Propulsion Laboratory here in Pasadena, California, and they were showing us a variety of different satellites, and the different team members who participated in different missions, and how emotional and attached they get to the different satellites: how long they’re supposed to last, when they crash, why they haven’t heard from them.


I had never thought about people having such an emotional connection to these machines, but there’s AI in these machines too, and they start to personify them and name them, and the teams get very emotional about them. So that’s a whole other area too, right? How we connect with our devices?


John C. Havens: Oh yeah. That’s a term called anthropomorphism, which I’m sure you know, but it’s something that we’ve all been trained to do as kids. When you’re three or four and you hold your teddy bear, you call your teddy bear whatever name, and you actually believe it’s real in one sense. If your mom says, “Put your teddy bear down,” you might still throw it off to the side of your bedroom, but the point is that there’s that real kind of imagination layer, and anthropomorphism is very powerful. You bring up a great point in terms of ethical concerns, and here there’s a lot about disclosure for, say, a senior who has a companion robot in their home.


It’s more that disclosure is sort of a way of reminding a person, and again this has to do with a person’s family, their cultural values, et cetera. It’s not that the attachment itself is a concern, because again that sort of just happens. It’s more the sense of what could happen where the designer knows that anthropomorphism is a naturally occurring thing, and this relates to a term called robotic nudging, which is something we’re focused on in a couple of different standards projects, for instance.


Robotic nudging can be very powerful in a good way, and the equivalent most people would know is if you have a Fitbit. It’s not a robot, but you’re wearing something on your wrist that essentially is you saying to yourself, “I’m going to get more healthy.” So you get a little text or something that says, “Go run,” or if it’s a diet app, “Don’t eat the cake,” right? Maybe annoying, but the point is that with disclosure, you know how your data’s being shared. You have said, “It’s okay to nudge me.”


That’s very different than a companion robot where part of the design might be that, through this anthropomorphism, the designers, who would in this case be nefarious for lack of a better term, design it knowing, “Well, the senior is going to be more open to suggestions of things to purchase, to using her credit card in a certain way.” This is the type of table-stakes thing where we’re really working, with standards and principles et cetera, to help the industry understand: your intentions may be good, but good intentions aren’t enough. Take a plane, for instance. No one would say to you as you board, “Hey, we haven’t checked with the FAA for safety things, and there’s no black box, so if the plane crashes, you know, we won’t know what happened to it.” No. Twenty or thirty years ago someone said, “We must have accountability and what’s called traceability, so we’ll know not just when these things crash but how well they function.”


But we’re still in an era where a lot of these autonomous and intelligent systems are getting built so fast that people don’t yet know how they’re going to respond to direct human interaction, and yet a lot of these things have already been created. A huge message for us is that you can’t advance technology for humanity, which is IEEE’s overall tagline, in a vague way. These are engineers. You have to be specific, point by point, and that’s where you both protect people and do the safety and risk work, but more importantly, you can really understand people’s values and honor them, where again, they honor human rights and all that.


Lori Schwartz: Right, because you have to approach it from a place where people understand from the get-go what this is. You can’t come at it from high science if it’s just a regular person, right? You have to speak in their world. And I want to talk a little bit with you about the CXI in general, the council, and who you are looking for to come join it, because I know you have a really impressive list of members right now and are looking to expand it. So we’re going to be back in a moment, and I’d love to hear about some of the members and how you’re going to build this out, and then talk a little bit too about how people can learn more.


So we’re going to be back in a moment again with the fabulous John C. Havens. I know you like being called fabulous, and John is filling us in on an ethical approach to artificial intelligence as the executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. We’ll be back in a moment with John C. Havens, getting more on the trends around AI and how we’re going to approach this as a global society. We’ll be-


Speaker 2: When it comes to business, you’ll find the experts here. VoiceAmerica Business Network.


Speaker 1: The key point of contact between consumers and brands is technology. StoryTech, a boutique agency, empowers you to use that tech to deliver your message, engage your customers, and raise the bottom line. How do you track and exploit the trends? How do you stay ahead of industry disruption? And how do you maximize profit from content? From strategy to execution, the answer is StoryTech. Inform. Innovate. Create. Visit us at That’s


Speaker 2: The business community’s first choice in Internet talk radio. VoiceAmerica Business Network.


Speaker 1: This is the Tech Cat Show with Lori H. Schwartz. If you want to find out more about our show or to leave a comment or question, send an email to That’s


Lori Schwartz: Hi everybody, and we’re back with John C. Havens, who is the executive director of the Council on Extended Intelligence. John, who are the members of this council right now?


John C. Havens: Sure, and thank you for asking. I’m actually executive director of both this and the program within IEEE, so it’s great because I get to drive both of them. Some of the members of the Council on Extended Intelligence are people like the amazing Joi Ito, who runs the MIT Media Lab. There’s Konstantinos Karachalios. He’s the managing director of the IEEE Standards Association. We have a lot of amazing IEEE people involved. Actually, we’re honored that our executive director, Stephen Welby, is a member.


And then a whole slew of amazing people. We have Jeffrey Sachs, who is a very well-known economist. A number of amazing women as well, by the way. It’s pretty good. I think it’s at least a 60/40 male/female split, because we really want to be focused on diversity. People like Katryna Dow, who runs an amazing organization called Meeco, which is focused on a sort of evolved, sovereign, blockchain-y way of thinking about data. We have two different people from the European Parliament, two people from the UK House of Lords. One of those is Baroness Beeban Kidron. She is phenomenal. She’s also a filmmaker. An amazing group of people.


Lori Schwartz: If you were to succinctly give a singular goal for the council, it’s really about looking at applying an ethics layer to all of this. Is that the big, big, main goal?


John C. Havens: The ethics is more for the ethics initiative of IEEE, and by the way they’re very closely aligned, so this is understandable. There are nuances, though, and thank you for asking is my point. With the council, a lot of it, and you kind of hit on this in the beginning of the show, is reframing the narrative around how we’re thinking about these technologies, because you said it really well, by the way. Most people in the tech/media world don’t get this, in my opinion, which is the fear. You know, people can kind of move past, okay, there’s a sci-fi movie, and I would rather not get shot by a robot. That would be bad. But I think the fear is actually ignorance, in the sense not of stupidity but of people not understanding the nuances of how the tech works. Meaning, if you don’t know how it works, then you can’t really fully understand it.


Then secondly, a lot of times, for the people who are creating technology, I kind of equate it to musicians, because I play blues guitar. On stage, when you’re creating, you may know that you’re playing in the key of A, but when you’re improvising you won’t necessarily know exactly what notes will come out. So for algorithms that are designed to create something somewhat new, it’s a real challenge, and I have a huge heart for developers and programmers, when you say to them, for instance, “Hey, what you’re creating has to be traceable. We have to know why it did what it did.” That is true. I believe that. That’s something we’re committed to. You can’t just say, “Well, we’re going to create it and not know what it does.” That said, it’s an amazingly complex challenge for the people creating these things. But anyway, to go back to the council: in both the council and the IEEE Global Initiative, we don’t use the term “artificial intelligence.”


As I mentioned before, we call it autonomous and intelligent systems, and we don’t use the term “artificial intelligence” anywhere in the Ethically Aligned Design document where we’re writing original content. If we’re referencing something else that uses it in a title, we do. But with the council, the whole thing about the term “artificial intelligence” is a couple of things. One is, and you mentioned this earlier, it’s a very broad term encompassing many different disciplines, so there tends to be confusion among the people who build it, because they’ll be like, “Are you talking deep learning, or what are you talking about?” But there’s also the negative aspect, the killer robots and all that, that’s been associated with artificial intelligence, and sometimes that can come from a mindset called computationalism. It’s essentially a kind of philosophy. It’s the idea that if someone is scientifically able to copy Lori’s brain, right, all the neurons and parts of it, and then upload your-


Lori Schwartz: Oh, why would you want to?


John C. Havens: Well, because you’re fabulous as well, and I’m sorry this answer’s a bit long, but if I can copy you, then essentially, outside of the physical form of who you are, I’ve copied you. We’re done. Versus the idea that spirituality, emotion, and your relationships to other people and the environment are aspects of who you are. This is the “extended” part of intelligence that’s key to our name, the Council on Extended Intelligence. If one thinks that the mental acuity of a person is all that a person is, our answer is, look, maybe that wasn’t the intention of the people creating it, but if you design technology with that as kind of a core philosophy, then the logic is this us-versus-them, machines-beat-humans thing. It’s not just a narrative. It’s kind of the imperative, right?


Lori Schwartz: Mm-hmm (affirmative).


John C. Havens: Oh, this technology just beat a human doing X. I'm speaking as John here, but this is one of the reasons I'm thrilled to be executive director for the council. I get genuinely sad when I see those headlines, because somebody somewhere is getting sad. It wasn't, "Look at this awesome, glorious technology that's able to achieve whatever." Maybe it's because it's link bait, but the title is often "X Beats the Human." Well, that means it's obviously going to reduce or diminish the human while also framing the technology as our enemy, and at that point it doesn't even have to say killer robots are killing us.


The "this beats that" mantra, that's probably, as of today, the key thing we're trying to change, and changing that narrative means we can't just say, "Hey journalists, please don't use this as a title." It's getting to the heart of understanding: humans aren't broken and in need of fixing. That doesn't mean we're perfect, but it doesn't mean that machines are going to come in and make humans perfect, and that's not what they're designed to do. Anyway, I'll pause there because I know I get all soapboxy.


Lori Schwartz: No, no. It's good. I mean, one of the things I was going to ask you, and this may come from the side of the Global Initiative as opposed to the council, is: are you guys at all actively reaching out to Hollywood? And the reason I say that is because so much of the average consumer's perspective on all of this is coming from Hollywood, and if you guys were partnering with storytellers to communicate the truth about this and to give it that ethics layer, maybe we'd all be walking around a little less nervous. You know?


John C. Havens: Well, I know there are some conversations. Nothing I can speak to because it's not formalized, but I appreciate your saying it as a media person yourself who's so savvy on this stuff. So, a formal invitation: Hollywood, please reach out. And I do know that IEEE's done a lot of great work providing expert hosts for, I think, PBS and different shows, and yeah, I'd love that because I think it'd be fantastic, either the initiative or the council having the ability to have, instead of a Black Mirror, kind of a White Mirror.


Which is to say, if we're only focusing on the dystopian stuff, you actually have to envision, in the case of fiction, the positive messages, so you can actually say, "Let's imagine a positive future." And speaking of Black Mirror, there's a great episode in the latest season. Yes, I've seen the whole thing. I don't know if you've seen the latest season of Black Mirror, but there's one where a couple is dating, and I'm sorry, spoiler alert, I'm going to give you the ending.


But what you find out is that this couple is involved in this dating service. They're part of what looks like a physical world, and they date about 20 people, but they date each other first. They really like each other, but this AI app puts them with other people to sort of get them to find the perfect person, so they end up fleeing this physical space together, and then at the end of it you realize they've been within a virtual reality environment. It's been a creation.


And so they show the actual humans, not the VR versions of their avatars or whatever, in a bar looking at each other, and it's a lovely moment, and the show is done so well because it's not too focused on the tech, at least for me. It's not like, "I'm using my 14 space. Get all the [inaudible 00:37:09]." The world is just there, and at the end of it, what I love is you see these two people. Maybe they were dating. Maybe they're going to start dating. You're not really sure, but the whole point of the app and the experience they just went through was that they were being introspective.


They were saying, "Is this someone I want to be with, or am I just relying on what might be a very good service? This algorithm said I should swipe left and date that person, not just because I think they're hot, but because there's a 90% chance of whatever." This was them saying, "We aren't going to rely on just that. Thank you, great technology, but we're also going to rely on ourselves." And it's actually a lovely episode because it's not overly dystopian. It's quite hopeful.


Lori Schwartz: Right. It sounds like a nice mix of a lot of things going on, and I love that too, because we all do have a lot of fears about Ready Player One and things like that, but that stuff is so, so powerful, right? Because it reaches so many people and then it sort of becomes ingrained in our consciousness, I guess, and so that's how we move through things. I see it even with my eight-year-old and the way that she talks to Alexa, and I know there's been a ton of sort of funny articles about how rude these little kids are to Alexa, and I'm like, "I don't understand where she learned it's okay to yell at the AI," so managing all of that is going to be a really interesting problem.


John C. Havens: Yeah, I'm glad you brought it up, because we haven't talked about, and this is on my part, the three primary principles of the council, and these reflect a lot of the work in the initiative too. We mentioned extended intelligence, and this is realizing that as people we already live within systems, in one sense.


Meaning your identity as you with your partner, then you as a mom to your child, then your friends and the people you work with on your show, as well as the social media channels, et cetera. If I were just to try to copy your brain and say, "We're good to go," it ignores the holistic system of all those different things. And when I say the environment, it's also not just because I'm a sustainability, greenie type. It's recognizing that if you're building tech oriented mainly toward getting human consciousness copied, and I'm not saying that's wrong, by the way, I'm just saying if that's what one does, then the environment becomes a very different focus, because really what you're trying to do is mainly sustain the technology that could store human consciousness, which means silicon and air conditioning and keeping servers cool, versus the planet.


The second big part of the council, and it's also a huge part of our initiative work, is thinking about data. Because the thing about any particular service you have at home, it can be a smart thermostat, whatever, is that right now people generally understand their data as something that has to be protected by someone else. X company should be protecting my data, and when I sign their terms and conditions they are supposed to protect it.


Well, whether it's something like Cambridge Analytica, or a hack, or just the nature of how data has been collected for the past 10 years, what that really means is we are tracked, and I'm not using that in a pejorative sense. I just mean our actions are measured or tracked from the outside in, and we are supposed to, as it were, rely on a government or a business to protect that data. Well, sure. Of course. That's just table stakes. That's normal.


However, with all these different devices and all these different nuanced algorithms, we are at a pivotal turning point in human history about our data, where along with being tracked from the outside in, all human individuals have to be given tools to essentially state their terms and conditions in a way that can be recognized at a digital and even an algorithmic level. Now, what that means is we envision, and we're building, a standard. At IEEE, for instance, it's called P7006. It's about creating algorithmic autonomous agents for individuals.


This gets a little sci-fi sounding, I think, to some people, but right now you wouldn't go online and use a computer without some encryption software to protect yourself from getting hacked. What people don't realize is that they, their identity, who they are, go into digital environments, and soon virtual environments like Ready Player One. You have to go into these environments not just trusting that someone's going to protect you, fine, and signing someone else's terms and conditions, but what are your terms and conditions?


So for instance, you're a parent, right? Lori, you mentioned this. You may have different requirements for your family, your preferences, for instance, that are hyper-relevant for your child, and you can list them out. It's not rocket science. You know, ten items saying this is how all my data is shared with my child. I'm a dad, so you've probably done this with camp forms. It takes like seven hours the first time, but then you can use it again and again if these terms and conditions are relevant for other things, like medical data, in other circumstances.


All that is to say, take the Alexa you mentioned speaking in your house. We envision that before you even buy a device, but certainly once you bring it into your house, if you have digital terms and conditions that reflect your family values and ethics et cetera, they can actually be translated, as it were, algorithmically, so it happens automatically. Then there's a real bargaining where you have parity. You have equality with governments or businesses saying, "These are our terms and conditions." It doesn't mean, by the way, that they're always honored, but it means you can state them. Anyway, I'll pause there before I go on to the third thing, because I know that was a lot.
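[Editor's note: to make the idea concrete, here is a minimal sketch of what "machine-readable personal terms and conditions" checked against a device's data requests could look like. This is purely illustrative; the policy format, field names, and matching logic are assumptions for the sake of the example, not part of IEEE P7006 or any other standard.]

```python
# Hypothetical sketch of a family's "personal terms and conditions"
# being evaluated against a new device's data requests. All names and
# the policy format are illustrative assumptions, not a real standard.

FAMILY_TERMS = {
    "voice_recordings": {"share": False},
    "purchase_history": {"share": True, "retention_days": 30},
    "location": {"share": False},
}

def evaluate_request(device_requests, terms=FAMILY_TERMS):
    """Split a device's requested data types into allowed and refused lists."""
    allowed, refused = [], []
    for data_type in device_requests:
        # Anything the family hasn't explicitly permitted is denied by default.
        policy = terms.get(data_type, {"share": False})
        (allowed if policy["share"] else refused).append(data_type)
    return allowed, refused

# A smart speaker asks for three kinds of data before setup completes.
allowed, refused = evaluate_request(
    ["voice_recordings", "purchase_history", "location"]
)
print("allowed:", allowed)   # allowed: ['purchase_history']
print("refused:", refused)   # refused: ['voice_recordings', 'location']
```

The point of the sketch is the "parity" Havens describes: instead of only clicking through the vendor's terms, the household's own stated terms get evaluated automatically, with a deny-by-default stance for anything they never agreed to share.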


Lori Schwartz: Oh no, that was good. You can keep going. I would love to hear the third thing, and then I want to talk a little bit about how do people engage? So give us the third one and then we’ll jump right into how we can find you and how we can engage.


John C. Havens: Cool. Okay, well, just on the data, by the way, a lot of people now are calling this data agency, or there's a term called data sovereignty. It doesn't mean you have to own all your data, and it's not anti-business or anti-government. Quite the opposite. It means we're now at a pivotal point where we can start to clarify how we want our data curated and then let people know, "This is how I want these terms and conditions to be honored." And there's a company, I mentioned this before, I think, my friend Katryna's. No financial interest. I just think it's a great company called Meeco. You can check it out to see how you can protect you and your family, but more importantly, start to get more clarity into how your data is shared.


The third pillar, as we call it for the Council on Extended Intelligence, is also a committee in the initiative, so this is something where there's clear cross-pollination. We call it enlightened indicators, and the reason we call it that is that with technology at large, a lot of times when you build something, not just technology, the metric of success is how well it did in the market. Did it sell? So it's about profit, and everyone has to make money. You've got to pay bills. Cool. But the GDP, the gross domestic product, was created back in the 1940s, and when it was galvanized it was a very specific era in our history, particularly after the Second World War. The world kind of lay in ruins. Exponential growth, meaning actual infrastructure growth, was a value. It made sense.


However, what that also meant is that it framed this sense of exponential shareholder growth. I want to be clear. It's not evil, and in one sense these terms don't even make economic sense, but what it does mean is that at the end of every quarter, when the door closes and a CEO goes in to talk to her shareholders, the shareholders aren't going to be like, "Hey, are we providing value for people?" They'll ask that, and that's great, but then they're also going to ask, "How is our growth?" And if the CEO says, "Well, we only made 1%, but that's cool because we're covering inflation," they will say, "No, at least in the shareholder model, you were supposed to increase shareholder growth exponentially."


So that means there's a sort of underlying layer that often doesn't get talked about at artificial intelligence conferences, especially around the future of work. I say this all the time, and this is me, John, right? My opinion, but it's based on a lot of research and understanding of economic drivers, not just the technology. There's not really a business motivation to keep humans around in jobs if there's the opportunity to create smart autonomous and intelligent technology that will do those jobs "better" than the humans.


Humans cost money. They're emotional. You have to buy them health insurance, and when you buy a system or a product instead, it means your company becomes literally more valuable if you sell it. I remember interviewing a lawyer about five years ago, and we had the conversation about AI and work. He said, "John, what am I supposed to do? I could buy one piece of technology for 50 grand, and it means, sadly, I put at least two people out of work who are new lawyers, but the job those lawyers have is to look through 300 big legal tomes to find like three examples of whatever, and machine learning does that beautifully." And he's right. He was totally right. He said, "I can increase productivity by 70%. I make one purchase for $50,000, and because of the work it can do, I can now make $2 million more next year."


So I'm bringing all that up to say there are what are called beyond-GDP indicators. It doesn't mean the GDP goes away. It's a complement. And there are things like the OECD Better Life Index, or some people may have heard of Bhutan's Gross National Happiness Index. These are ways to measure what's called societal flourishing, which go beyond just saying, "That country's GDP went up, so they must be doing good." Because when GDP goes up, what it generally means, economically, with a very small set of lenses for what value even means, is that it's typically only going up to benefit a very small number of people. And again, back to the shareholder thing: if you're building any technology where the metric of success is exponential growth, and the technologies then help speed up exponential growth, that seems to make sense, but it's a very different equation for humans, because human wellbeing is finite. You can't have exponential growth continue without it starting to harm humans, and there's a wonderful opportunity for business and government and society to move beyond GDP.


Lori Schwartz: So it really does connect back to a lot of bigger issues about how everything kind of flows. Business, life, everything really. I mean it’s in that big consideration set. It’s huge.


John C. Havens: It is huge, and to put a positive spin on it, and this is the message of IEEE's work and the council, on an individual level, and I'll say this to you, Lori, and anyone else listening: has anyone asked you what you're worth, without asking how much money's in the bank? Are you able every day to feel that you have worth and value beyond money? Because unfortunately, with GDP at the societal level, most people don't care. I didn't care up to four years ago. I'd be like, "I'm not sure what the acronym even is."


Now, however, what I realize, especially in the States with the trickle-down theory, is that a lot of times there's this pressure to feel that until you make a lot of money you are worthless. I'm being hyperbolic, but not by a whole lot, and that is not the message of wellbeing. That's actually inaccurate according to positive psychology, meaning the actual empirical science of studying what's called human flourishing. There's physical health. Most people know about that. There's mental health, and here's some quick, not-great news. Suicide and depression are up. It's a pandemic around the world, so yes, there's a lot of great things to talk about.


My dad was a psychiatrist. My mom's a minister. I tend to also focus on where the pain is. And if in society depression and suicide are increasing at such a rate, and by the way, there's a lot of great AI being used to address that, then one has to ask: is the societal metric of exponential growth helping us or harming us? And my answer, speaking as John, and again, a lot of people agree with this work and that's why we're doing it together, is that it's not working. The GDP in and of itself is not working, in the sense of having society say, "That number goes up and everyone's good." This is Joseph Stiglitz, Jeffrey Sachs, who's on the council. There's agreement, right? Again, it doesn't mean the GDP goes away, but it means we also ask, how's the environment doing? How are things like mental health doing? And if we can get to the point that, with that same CEO who goes in every quarter, my dream is her shareholders will say, "Great. Our profits are doing good. How about the environment? How about societal metrics like depression?"


And she will be held accountable for making sure those numbers, however they're measured, are not going up exponentially but are being cared about. Again, that's a form of introspection, and it will be reflected in society as we create technology, to say, "Cool, this new technology will increase GDP, whatever. But are we thinking holistically about how it's also going to affect the environment and the human state itself?"


Lori Schwartz: It’s so, so big and so important what you’re talking about. Before we sign off I’d love to just get from you, like how can we find out more? How do we find out more about you? And also not only the initiative but also the council. What’s a good place to dig in?


John C. Havens: Oh, sure, and thank you again. It's such a pleasure to talk to you, Lori. For the council, if you go to GlobalCXI, so Global, C, X as in xylophone, I. We just launched about a month ago, and the whole vision and the specifics about these three pillars are there. For the IEEE Global Initiative, the easiest thing, just because it's kind of a long URL to say out loud, is to google IEEE, then the initials S as in Sam, A, Ethics Initiative. Sorry, I know it's a lot, but that's the way to get to the webpage. The shorter version is we created a different site, [inaudible], or once you go to the Global CXI website you can actually email me, and that's probably the easiest. I can certainly then answer, for any of your listeners, any questions we didn't get to cover today.


Lori Schwartz: Give us your email one more time.


John C. Havens: Oh, sure. The URL is [inaudible], and at the bottom of the page, or you can click at the top, it says Contact. When you click there you'll get to me.


Lori Schwartz: Oh great. Great. Any last advice for anyone who's trying to figure this all out? What's a good place to start? Obviously dig into what you guys are publishing and what you guys are capturing, but if someone's concerned about approaching an AI project with this ethics layer, you guys are the ones to dig into?


John C. Havens: Well, sure. We'd be honored, but there's so much great work being done. There's Ethically Aligned Design, the paper we created, and the 14 standards working groups, which are all free to join, so if someone has interest and some expertise in any of those areas, we'd love to have you in those groups.


A quick fun thing. A lot of people talk about the movie Robot & Frank. Frank Langella starred in it. It's a great movie to watch in terms of asking deeper questions about AI and robots and all that, because it's the story of a guy who's got dementia, and his son buys him a home companion robot. You can very well tell it's a companion robot. It looks like a dishwasher on wheels. But Frank, the guy, is an ex-thief, and he trains the robot to help him rob things, and so you start to get this real sense of the fallibility of humans, but also this lovely sense of humans in relationship to how they deal with life. Anyway, it's a great film to watch because it's a comedy, but you realize about halfway in, "Ah, this is stuff that could happen to me. I have a parent who is getting older. I am a person who has fallibility."


And the last big question I’ll leave, which is from my book Heartificial Intelligence, and I hope this is useful to people is the question, how will machines know what we value if we don’t know ourselves? And I hope-


Lori Schwartz: Wow. That’s a great, great note. I’m sorry John.


John C. Havens: Cool.


Lori Schwartz: We have to wrap, but that is actually a great note to wrap on. We have been talking to John C. Havens. John is the executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and also works with the council, and I know I kept getting those confused, but it has been wonderful to talk to you and to look at artificial intelligence, or autonomous and intelligent systems, from a different viewpoint. Thank you so much, John.


John C. Havens: My pleasure Lori. Thank you.


Speaker 1: Thanks so much for listening to the Tech Cat Show. Please join Lori H. Schwartz again for another great program next Wednesday at 4:00 PM Eastern time, 1:00 PM Pacific time, on the VoiceAmerica Business channel and syndicated to the VoiceAmerica Women’s channel.