InTechnology Podcast

#37 – A Former CIA Officer and Congressman’s Thoughts on Cyber Security, AI and More – Part 1

In this episode of Cyber Security Inside, Tom and Camille talk with cyber security expert, Managing Director at Allen and Company, and former Congressman and undercover CIA officer, Will Hurd. While we’re sure he has plenty of exciting stories from his time with the CIA, the conversation steered clear of that (if he told us, he’d have to kill us) and instead covered other exciting topics, like:

•  Ethical use of AI

•  Facial recognition and AI biases

•  Artificial General Intelligence (AGI)

…and more. Don’t miss it!


Here are some key takeaways:

•  AI is a global obsession for good reason. It can be used for everything from diagnosing cancers to saving water in agriculture and introducing us to new music.


•  We have to be innovative and have the infrastructure and compute power in place to support AI.

•  Like all technology, AI can be used for good or bad. We have to ensure we’re making ethical use of it and that it’s unbiased, not discriminatory.


•  A growing focus within the AI and cybersecurity space is defending the training data from manipulation.

Some interesting quotes from today’s episode:

“And what undergirds all of this is cyber security. We’ve got to be able to defend our digital infrastructure to protect our intellectual property, to make sure that people aren’t selling our secrets.”


“When it comes to certain technologies like Artificial Intelligence, coming in second place can’t happen.”


“But then there’s also going to be downsides, like with any kind of technology. We have to make sure of the ethical use of these tools. It starts with making sure AI follows the law.”


“We can’t allow algorithms to be biased.”


“We’re already seeing Artificial Intelligence being used in a medical environment to diagnose cancers that the human eye hasn’t been able to do. You can look at your iris and determine a certain kind of cancer and you catch it, months, if not years in advance, which prolongs life.”


“Most technology and most tools can be misused, but they also have an upside. What we have to realize is this tension between using the tool and making sure it’s protecting our civil liberties.”


“I always get nervous talking about some of these sci-fi things. But we’re closer than we expect. And it still blows me away.”



Tom Garrison: [00:00:00] Our guest today is Will Hurd. He is a former member of Congress, cyber security expert, and undercover officer in the CIA. That is pretty cool. Uh, for almost two decades, he’s been involved in the most pressing national security issues challenging the country–whether it was in the back alleys of dangerous places, boardrooms of top international businesses, or the halls of Congress.
He was able to get more legislation signed into law in three terms than [00:00:30] most congressmen do in three decades–substantive legislation like a national strategy for Artificial Intelligence. He is growing the U.S. Trans-Atlantic partnership with Europe as a trustee of the German Marshall Fund. And most recently served as a fellow at the University of Chicago Institute of Politics.
So welcome to the podcast.

Will Hurd: [00:00:52] Hey Tom. Thanks for having me. It’s great to be with you.

Tom Garrison: [00:00:54] Boy that is, that is quite a background. I would love to dive into the whole background in, in the CIA. That would be just fantastic. But unfortunately we really can’t do that.

Will Hurd: [00:01:06] That’d be, that’d be for the next podcast.

Tom Garrison: [00:01:10] There you go. Um, I thought it would be best really to start with, you know, what are you focusing on now in terms of your priorities, especially relating to things around security?

Will Hurd: [00:01:24] Sure. So right now I’m a managing director at Allen and Company. It’s an investment bank, and I’m helping advise clients, um, technology clients that have a national security perspective. But when I look at, you know, the broader national security concerns from a technology perspective, it really is this new Cold War that we’re in with the Chinese Communist Party. The Chinese government has made it very clear that they’re trying to surpass the United States as the global superpower, and they’re going to do that by being a leader, a global leader, in a number of advanced technologies that include semiconductors, 5G, AI, quantum computing, space, uh, you name it. And this is going to drive our economy, it’s going to impact our future. And this right now is one of the most important things.
And what undergirds all of this is cyber security. We’ve got to be able to defend our digital infrastructure to protect our intellectual property, um, to make sure that people aren’t selling our secrets. This was a concern that I didn’t have when I first came into Congress, to be frank. And it’s something that evolved over time. And now I’m enjoying being in the private sector, working with academia and the private sector on these issues from a different perspective.

Tom Garrison: [00:02:45] You know, you used the word race, and, uh, I think of a race as a good thing. It brings out the best in you in terms of competition. Is that the right way to think about it in this case? Is it a fair race or is it a race in, in one way, but actually not really a race?

Will Hurd: [00:03:03] Well, I think the way you characterize it is correct. Um, I would say in this race with China on global leadership, the U.S. can win because we can out-innovate. Now, we have to ensure that the global rules are fair, and we’ve seen the Chinese government trying to influence, uh, global standards, you know, in their favor. Uh, we’ve seen them stealing technology for a number of decades, stealing IP and using it.
You know, we introduced them and allowed them to come in, and supported their entrance into the, uh, World Trade Organization, and they’re not following some of those rules. Um, so yeah, uh, a race is a good thing. But when it comes to certain technologies like Artificial Intelligence, coming in second place can’t happen. You know, there’s such a first-mover’s advantage. This is one of the reasons why Vladimir Putin said “whoever masters AI is going to master the world.”
So that race, yes, it brings out the best in us, but in some cases, if we don’t win, it’s going to have an impact on our economy, it’s going to have an impact on the dollar. Our savings are not going to be worth as much when we actually retire, or we’re not going to be able to purchase as many goods and services. You know, it’s going to have an impact on our way of life. And then look at all the things that lead to that: why does the Chinese government care about 5G? Because 5G is going to really empower widespread use of Artificial Intelligence. Why are they trying to double down on semiconductor manufacturing? Because the compute power that’s necessary in order to achieve AI, uh, requires a whole lot of, uh, semiconductors. Um, so these issues are interrelated in this broader race.

Camille Morhardt: [00:04:49] So what areas of Artificial Intelligence do you think are going to benefit or have the biggest effect on the American people?

Will Hurd: [00:04:59] Well, so, you know, the real answer is, I don’t know. Right? And that’s what makes this exciting. Uh, we’re already seeing Artificial Intelligence being used in a medical environment to diagnose cancers that the human eye hasn’t been able to. You can look at your iris and determine a certain kind of cancer, and you catch it, you know, months, if not years, in advance, uh, which prolongs life.
We’re seeing it being used in agriculture. So, uh, you’re able to, uh, use less land, use less water, but you’re, you’re increasing your yields, right? You’re saving energy. It’s pretty fantastic.
From a commercial perspective, um, I’ve been exposed to music and bands and groups that I would never have been exposed to if, you know, my Spotify hadn’t made recommendations. So there’s a lot of upside. Right. Um, but then there’s also going to be downsides, like with any kind of technology. We have to make sure of the ethical use of these tools. Um, it starts with making sure AI follows the law. Right? We have a lot of laws already on discrimination. And so whether it’s somebody implementing an AI tool wrong so that it discriminates, and that’s the implementer’s fault, we can’t allow, um, algorithms to be biased. And so the issue around that is something you have to deal with. Facial recognition is always a topic that people have concerns with. And so there’s a lot of upside.
But we need to take advantage of technology before it takes advantage of us. And the only way we’re going to be able to do that is if we have the public and the private sector working together on these technologies and recognizing we’re in this race, because the Chinese government is pushing all of their factors of production in one direction in order to get there before we do.
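Hurd’s point that “we can’t allow algorithms to be biased” can be made concrete with one of the simplest fairness checks used in practice: demographic parity, the gap in positive-decision rates between two groups. The sketch below is purely hypothetical and not from the episode; the data and names are made up for illustration.

```python
# A minimal sketch of a demographic-parity check.
# All data here is synthetic and hypothetical, for illustration only.

def selection_rate(decisions):
    """Fraction of positive (e.g. 'approve') decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in selection rates between two groups.
    A gap near 0 means the two groups receive positive decisions at
    similar rates; a large gap is a red flag worth auditing further."""
    return abs(selection_rate(decisions_a) - selection_rate(decisions_b))

# Hypothetical loan-approval outcomes (1 = approved) for two groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6/8 approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 3/8 approved

gap = demographic_parity_gap(group_a, group_b)
print(f"demographic parity gap: {gap:.3f}")  # prints 0.375
```

Demographic parity is only one of several competing fairness metrics (others condition on qualification or error rates), which is part of why, as the conversation notes, these questions have to be dealt with explicitly rather than assumed away.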

Tom Garrison: [00:06:51] When we talk about Artificial Intelligence, a story comes to mind. Back in the day, you had humans competing against computers in chess. And for the longest time, humans could easily beat computers. And then the computers got pretty smart and, uh, it became kind of a toss-up who was gonna win. But then they moved to Artificial Intelligence. And since the computers with AI have been competing against people, they can’t be beat. And to me, that’s an interesting angle on how powerful Artificial Intelligence can be, for good or for bad.
And my concern is not about Artificial Intelligence for the good. That that’s, uh, you know, we all, we all are gonna benefit from that. My concern is that it becomes a very, very potent capability that when used for bad, humans may have a very difficult time beating.

Will Hurd: [00:07:59] For sure. And these are reasonable concerns, but we can say that about every technology or tool. Um, most technology and most tools can be misused, but they also have an upside. What we have to realize is this tension between, uh, using the tool and making sure it’s protecting our civil liberties, right, and making sure it doesn’t get out of control. Artificial Intelligence to me is equivalent to nuclear fission. Nuclear fission and that chain reaction, when it’s controlled, produces something terrific: it gives us nuclear power. A great power source, you know, clean. It’s a great use. When that chain reaction within nuclear fission is uncontrolled, it leads to nuclear weapons. Right. And so this is kind of the debate that we’re going to be having around Artificial Intelligence.
However, I go back to: we are in a race, so we have to get these issues around ethics and how to protect civil liberties within Artificial Intelligence right. We have to address those while we’re building this technology, because the Chinese government doesn’t care. They do not care about civil liberties. Right now, they are using this tool in places like Xinjiang province to put their ethnic minority, the Uyghurs, into internment camps. Right. So we see this kind of thing already happening. And the Chinese government does not care.
And I’m very clear when I say Chinese government, I mean the Chinese Communist Party. I don’t mean the Chinese people. I definitely don’t mean Chinese-Americans. The amount of hate crimes that have been directed at our Asian-American brothers and sisters is just appalling. So I’m very precise, and all of us have to be precise, in the language we’re using in this race.
When I first got involved in doing things around Artificial Intelligence, when I was in Congress, I was talking about Artificial Intelligence to people, and if they were older than me, they would always say “HAL 9000,” the creepy computer on the spaceship in “2001: A Space Odyssey.”
And I’m saying, yeah, I remember that; it had the creepy voice and it was scary. Right. If they were younger than me, some would say Ava; she was the killer robot from “Ex Machina,” which is just a fantastic movie, by the way. And then others would say it’s going to turn into the Terminator. Now, we’re far from getting to the Terminator, but we are getting to what’s called AGI, Artificial General Intelligence. That’s when you have an algorithm that can actually, truly operate like a human. That’s kind of the platinum standard of Artificial Intelligence, and we’re closer than most people expect. So AGI, Artificial General Intelligence, is not going to immediately lead to the Terminator, because, you know, we don’t have materials that are indestructible and things like that. But you could have, in essence, an algorithm that just talks to you, Tom, that just focuses on you. Or Camille, an algorithm that’s just focused on you, learning about you, and talking to you the way a human can, to try to influence you to do something. Imagine that’s done to 2 billion people.
Woah! What are the consequences? Right? So these are some of the conversations that we have to have.
Tom Garrison: [00:11:48] By the way, if you haven’t seen it, it’s kind of a creepy movie, but there’s a movie called “Her.” Scarlett Johansson voices that sort of Artificial Intelligence. She was fantastic. It is a really, really interesting take on Artificial Intelligence, about somebody who basically falls in love with her.

Will Hurd: [00:12:07] I always get nervous talking about some of these sci-fi things. But we’re closer than we expect. And it still blows me away. The book “I, Robot,” right, uh, written 70 years ago; Asimov predicted that the way you’re going to influence AI in a negative way is to corrupt the training data. And that’s true. That’s one of the things, and a growing area within the AI space, and cyber security specifically, is defending the training data from manipulation.
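Hurd’s point about corrupting training data can be illustrated with a toy example. This is a hypothetical sketch, not anything from the episode: the data is synthetic, and the model is a deliberately simple nearest-centroid classifier. The idea it demonstrates is real, though; flipping labels on a slice of the training set (a label-flipping poisoning attack) shifts the learned model enough to change its predictions.

```python
# A toy demonstration of training-data poisoning via label flipping.
# Synthetic data and a deliberately simple model, for illustration only.

def centroid(points):
    """Mean point of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(data):
    """data: list of (point, label). Returns one centroid per label."""
    by_label = {}
    for point, label in data:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    """Classify by nearest centroid (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], point))

# Clean training set: two well-separated clusters.
clean = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"), ((0.1, 0.2), "A"),
         ((5.0, 5.0), "B"), ((5.2, 4.9), "B"), ((4.9, 5.1), "B")]

# An attacker flips the labels on two of the "B" points to "A";
# the learned "A" centroid drifts toward cluster B.
poisoned = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"), ((0.1, 0.2), "A"),
            ((5.0, 5.0), "A"), ((5.2, 4.9), "A"), ((4.9, 5.1), "B")]

query = (3.0, 3.0)
print(predict(train(clean), query))     # prints B
print(predict(train(poisoned), query))  # prints A: the boundary has shifted
```

Real defenses (data provenance, outlier filtering, robust training) are far more involved than this, but the mechanism shown here is exactly why defending training data is becoming its own cyber security discipline.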

Camille Morhardt: [00:12:43] When you were in Congress, how did you figure out what to worry about, what to focus on? You were looking at security from kind of this broad public policy perspective. I assume you were getting reports that the public doesn’t necessarily have. And you were also looking at concerns from your constituents, which I gather could be anything. So how were you deciding what to look at and what to focus on?

Will Hurd: [00:13:08] It’s a great question. And so it starts with, you know, one: work on things that you enjoy, right? And then, I have an expertise. And so one of the reasons I ran for Congress was because I had an expertise in national security and I was in the CIA. I was responsible for recruiting spies and stealing secrets. Best job on the planet.
And it was awesome working on the most important national security issues of the day.
And I felt like members of Congress were negating what my buddies and I were doing by putting ourselves in harm’s way. So that’s why I ran for Congress. And so I said, “let’s focus on this; I want to be a leader on national security because that’s my background. That’s my experience.” What do I know about health care, other than getting it, right? I don’t have any unique perspective or insights there. And so I tried to stay very focused on national security.
Now, when I won, Jason Chaffetz was in Congress. He was a member from Utah. He was the chairman of the Oversight and Government Reform Committee, and he wanted to create a subcommittee on Information Technology. I was the only member of Congress who had helped start a cyber security company, so he asked me to run this committee. And so in that position, we were focusing on emerging technology, IT procurement, and cyber security. And so through that lens is really where I started getting experience. And then, you know, how does this impact my district? San Antonio, which is about 50% of my old district, is Cyber Security City, USA. We have the largest concentration of cyber security professionals outside the national capital region. And the national capital region, I remind people, is the Pentagon, it’s the CIA, it’s the NSA, right? DHS, CISA, right? And so in San Antonio, this matters, and San Antonio is aware of and attuned to this. Uh, but it’s also a responsibility to focus on issues that are important for the country. And I do believe that this race for global leadership in advanced technology is a generation-defining challenge. And so that’s why I spent time trying to focus on this.
