InTechnology Podcast

#74 – From Large-Scale Cyber Attacks to AI Developments: The Biggest Cybersecurity Events of 2021 Reviewed

In this episode of Cyber Security Inside, Camille and Tom get into the biggest cybersecurity topics of the past year with Maribel Lopez, Founder and Principal Analyst at Lopez Research. The conversation covers:

  • The large-scale attacks this year on infrastructure across a wide range of companies.
  • Where the attacks were occurring to have the biggest impact on systems.
  • The development of artificial intelligence and how far along we are in that area.
  • Ways to convince companies and decision makers to focus on cybersecurity.

And more. Don’t miss it!

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Here are some key takeaways:

  • One of the more surprising things that became big this year was large-scale attacks on a wide range of companies (gas lines, hospitals, schools, etc.). Things now feel more targeted and directed, rather than just opportunistic.
  • Right now, companies trying to fix vulnerabilities are highly dependent on consumers updating their systems. So, attackers can just wait until a vulnerability is pointed out by ethical researchers, and then count on not everyone updating.
  • Attacks expanded into areas like the supply chain, because it isn’t just about the core systems of the company anymore, it’s about every part of the process that relies on technology.
  • Having old infrastructure can really hurt a company or a business. And at this point, people can’t say they didn’t know, since there have been so many examples to learn from. The responsible thing to do as a company is to have modern infrastructure. It is very expensive, too, of course.
  • With AI, there are big questions about how the models being developed from collected data might impact privacy.
  • Data loss has been a huge topic as well. When a company loses a bunch of data, we have to think about how they were collecting the data, how it was encrypted, how AIs were accessing it and analyzing the data. How do we collect data in a safe, privacy centered way, while still getting useful information that will help create new business models?
  • The more connected things are, the more risk there is if an attack happens. Although connection in infrastructure might make the infrastructure run better, if it were to be attacked it could affect huge systems instead of an individual instance. For example, consider if all traffic lights were linked. Now, hacking the traffic lights can cause a huge grid-lock.
  • AI has been great for helping run and develop security software. However, the other side, the hackers, are also using AI to find vulnerabilities. It is a constant battle. But we are still early in the AI process, with a long way to go in research and development.
  • One way to help convince companies to focus on cybersecurity is to show them the monetary impact of what they will lose by being shut down, or what they might have to pay by losing customer data. Another way to convince them is to look at brand reputation for getting attacked and losing data. 
  • When looking at security, we need to look at the problem first. Often, we have the technology and look for places to apply the technology. That is backwards.

Some interesting quotes from today’s episode:

“These were things where it’s like, if you can figure out what the potential vulnerabilities are there, look for them, make that type of attack, you know that your success in getting paid as a malicious actor is pretty high because it is critical infrastructure.” – Maribel Lopez

“I think that the first stage, you know, before we even talk about regulation, is just for every organization to try to figure out: is your IT infrastructure holding you back? And I’m not talking about an agile, digital transformation way. I’m talking about an it-will-shut-your-business-down way.” – Maribel Lopez

“It’s not just security, it’s also privacy. It’s also functional safety or personal safety. And then you’re very quickly kind of moving into the ethics space.” – Camille Morhardt on how tech has become part of medicine, space exploration, and more

“The greatness and the sorrow of AI is AI can take a lot of data, and it can find a lot of patterns and insights very quickly.” – Maribel Lopez on how AI can be very helpful, but can also open the door for attackers

“If you ask me where we are and we put it into, say, like a baseball analogy, we’re probably in the fourth inning of what’s going on with AI.” – Maribel Lopez on the development of AI

“One of the things we talked about a lot is using AI just to figure out if you have been breached – if there’s some activity that’s going on within. You know, somebody who’s just lying in wait for the perfect data or to set up the perfect attack. You know, there were statistics that it took nine months to a year for a lot of organizations to figure that out on their own.” – Maribel Lopez

“When somebody comes to you and says, ‘Hey, are we secure?’ That’s a question that nobody can really answer truthfully. But it is a question that is legitimately asked of every senior business executive at some point in their career.” – Maribel Lopez


[00:00:40] Camille Morhardt: The following conversation was recorded live from the green room of Intel’s iSecCon 2021, otherwise known as Intel Security Conference.

[00:00:53] Tom Garrison: So today our special guest is Maribel Lopez. She is Founder and Principal Analyst at Lopez Research, following the latest in technology trends. So welcome back to our podcast, Maribel. 

[00:01:09] Maribel Lopez: Thanks. Thanks for having me. I can’t believe it’s been about a year. Wow.

[00:01:14] Tom Garrison: Yeah, Maribel was our very first podcast guest. And this is back in the day before we did video. So today, what we wanted to do was really talk about some of the trends that have evolved over the last year, and then also get your perspectives. From a very high level, what are some of the trends that maybe caught you a bit by surprise, or maybe turned out a little bit bigger than you’d anticipated from when we spoke a year ago?

[00:01:43] Maribel Lopez: I think when we spoke, one of the things I was really focused on was this move to remote work and the potential for that to open up a lot of security vulnerabilities–people having, uh, older hardware that might not have been patched, a lot of phishing. Ransomware was being talked about, but it wasn’t yet the thing it is right now. And you know, one of the things that’s really changed is we’ve seen so many attacks on such a wide range of companies, and really in some devastating ways–shutting down gas lines, um, shutting down hospitals during some really intense times. You don’t want to have issues with hospitals during COVID. That was just a real negative thing.

So, uh, really, I think the attack surface shifted a little bit from what I originally thought it would be, where it was an attack on individuals that would allow you into a company, to really attacking companies that had older infrastructure. Now, how they found them, maybe they just stumbled upon them, but it seemed to be a much different animal, much more direct than I thought it was going to be.

[00:02:52] Camille Morhardt: You’re saying now it is more directed and targeted, whereas before it was more opportunistic?

[00:02:57] Maribel Lopez: I feel like there’s definitely more targeting going on than there has been in the past. I also feel like people thought it was a really good opportunity. The malicious actors really thought about it and said, “Hey, now is a time when there’s probably a lot of different places we could find vulnerabilities.” So I think they were always out there, but it seemed like the level of attacks just got amped up in this particular timeframe.

[00:03:27] Tom Garrison: Yeah. I like to point out to folks, when you’re a malicious actor, you don’t have to necessarily be on the cutting edge of security research. What you can do is wait for vulnerabilities to be found by ethical researchers and to be talked about at universities or at security events, and then count on the fact that people won’t update their systems.

So if you’re an attacker and you want to attack a company, you can just kind of go to the bookshelf and pull off the vulnerability that you choose to try to exploit and then just look to see what companies haven’t updated their systems and are still vulnerable to that attack. 

[00:04:12] Maribel Lopez: That is so spot on, Tom.  (laughs)  Really is.

[00:04:16] Camille Morhardt: I know we’ve talked a lot over the last year about kind of the nature of attacks changing and obviously a migration toward attacks on firmware and hardware. But do you think that there’s also been like an increase on critical infrastructure or supply chain? It feels like that to me; it feels like the attacks matter more because they affect broader numbers of people in more important ways.

[00:04:41] Maribel Lopez: One of the things that we’re seeing is, if you built a strategy around it, it’s like: what type of company would have to pay? How much would they be willing to pay? And the more pain there is, the more critical that infrastructure is, either in terms of supporting a bunch of people or, in some cases, being life-threatening, right? These were things where it’s like, if you can figure out what the potential vulnerabilities are there, look for them, make that type of attack, you know that your success in getting paid as a malicious actor is pretty high because it is critical infrastructure.

There were some really interesting discussions around the pipeline, for example. It’s like, “Should you have paid? Should you not have paid?” And these are hard decisions, but I really felt bad for some of these executives because they were like, “Hey, we have to get this back up and running.” So, you know, it’s one of these matters of: if you have an issue and you can’t resolve the issue quickly, then you could see why they would have paid.

But having seen that, I think that made it even more interesting for more people to go out and say, “Okay, let’s try to find these types of vulnerabilities.” I mean, okay, if you attack the garden-variety user and they get locked out of their machine, how much is that worth? Probably not that much. Whereas if you attack a hospital, you attack a transportation system, you attack a pipeline, there’s a whole different level of concern that happens there.

[00:06:12] Tom Garrison: The concern element that you mentioned, I think, also translates to government interests and regulations, and you saw a bunch of interest in the US around critical infrastructure and sort of protecting it against foreign adversaries. But you also saw some of that expanding into even things like supply chain concerns, as well. So there are the direct hacking elements of it, and then there are the elements of, “Well, how much of my infrastructure depends on, you know, various forms of technology? And do I need to regulate that more closely?”

[00:06:53] Maribel Lopez: Um, there’s certainly been a discussion of regulation. You know, one of the things I think is interesting is that, in my opinion, there were many industries that for a lot of years under-invested in technology–that are using a lot of legacy technology–and they’ve actually opened themselves up to this. And healthcare is a great example of this. Healthcare was very far behind in terms of modernizing its IT infrastructure. And I think we’re starting to see a change in that. So this was a terrible way to get there.

But I think that the first stage, you know, before we even talk about regulation, is just for every organization to try to figure out: is your IT infrastructure holding you back? And I’m not talking about in an agile, digital transformation way. I’m talking about in an it-will-shut-your-business-down way. There are two levels of concern here. And I think there’s now a much healthier recognition that, even if we didn’t want to go there because of competition, because of making more money, because of all the other things that we as analysts have certainly spoken about in the digital transformation era–now, I think there’s a realization that if you have old infrastructure, that infrastructure might not be supported for new software releases, patches, and the like, and you are really open to a whole world of hurt. I feel like it’s on you as an organization at this point. You can’t say you don’t know anymore, because there’ve been enough examples just over the past year to say, “Oh, I should’ve been looking at this.”

So if you’re not looking at it now, um, there’s potential legal action that could happen out of that. And it’s just irresponsible. So I think at this point, we’re in a different place in terms of talking about it. We might get down to the point where there’s regulation of, like, a minimum security standard for certain types of data in certain types of industries. But to me, regulation just takes too long. Like, you’re well beyond what needs to happen by the time anything gets approved. So it’s probably better for you to just look at it yourself: have people come in and do security audits to figure out where your vulnerabilities are. Think about how old certain infrastructure is–is it five years old, 10 years old, 20 years old in some cases (laughs)? Uh, this is a real problem for certain organizations, and an expensive one too, I have to say.

[00:09:13] Camille Morhardt: Well, it’s getting more and more intimate as well, because I think we’re starting to see–we talked, you know, for a decade about IoT, and I think, you know, maybe that was even over-hyped just a little bit, how quickly everything would be connected–but we are seeing, you know, Wi-Fi modules added to medical devices that sometimes reside inside your body, and our critical infrastructure is moving to space, right? So we’ve now got satellites and things orbiting the earth that we might need to be doing patches of.

So it seems to me we’ve got extremes on both the mini side and the maxi side of even just the ability to update systems in those kinds of conditions. It’s not just security. It’s also privacy. It’s also functional safety or personal safety. And then you’re very quickly kind of moving into the ethics space, I think, once you touch on those things.

[00:10:06] Maribel Lopez: Absolutely. I mean, there’s privacy and ethics in AI. So one of the really interesting pieces of research starting to happen for a lot of organizations right now is: we’ve got AI and we’ve got privacy, and there are some interesting ethical concerns around what AI models will create based on the data they have. So a lot of people are building models with maybe not the most representative data samples. But there’s certainly a lot of concern about the privacy aspects of data. And now there’s a lot of regulation around the privacy aspects of the data that you’re collecting and how you’re building models with it, and potentially very large fines associated with that if you’re found in violation of privacy.

And there’s a lot of data loss that we’re dealing with right now. And one of the things I think is really interesting is, I don’t know how many times, um, either of you get a text from a bank saying that theoretically your information’s been compromised, or an email of that sort; some of the attacks are pretty sophisticated-looking, so it could be something that you’d want to look into. But let’s just say that, in some manner, shape, or form, someone’s losing data–like T-Mobile just lost a bunch of data from their customers. Uh, we all know about the financial reporting institutions losing a bunch of data. What happens then is we have to start thinking about, like: well, how was that data collected? Was it encrypted? How are we doing AI analytics on it? Should we be looking at things like homomorphic encryption? Is differential privacy enough, as an example? Right? Could I still somehow figure out, “Well, that was probably really Maribel there?” Like, is there enough, uh, separation in that aggregate data?

So I think there’s a lot of concern right now about how we’re going to collect data, how we’re going to secure that data, and how we secure it in the context of analyzing that data to create new business models and other things.
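The differential privacy question Maribel raises can be made concrete with a small sketch. Everything below is illustrative rather than from the episode: the records, the opt-in query, and the epsilon value are hypothetical. The idea is to release an aggregate count with Laplace noise calibrated so that no single person's presence can be confidently inferred from the output.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample zero-mean Laplace noise via the inverse-CDF method."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float, rng: random.Random) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query changes by at most 1 when one person is added or
    removed (sensitivity 1), so Laplace noise of scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Hypothetical records: (customer, opted_in). The true count of opt-ins is 3.
records = [("alice", True), ("bob", False), ("carol", True), ("dave", True)]
rng = random.Random(42)
released = private_count(records, lambda r: r[1], epsilon=1.0, rng=rng)
# 'released' hovers around 3 but is randomized, so an observer cannot tell
# for certain whether any one customer is in the opted-in group.
```

Averaged over many releases the noise cancels, which is exactly the trade-off she describes: useful aggregate information with individual deniability. Homomorphic encryption tackles the complementary problem of analyzing data without ever decrypting it.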

[00:12:13] Tom Garrison: Part of what you were speaking about there reminds me of a conversation Camille and I had with a guest just recently. And I was sharing that, you know, back when I was in college, my student ID was my social security number. Everybody’s student ID was their social security number. And now, of course, you’re like, “Oh my God, I can’t believe that.” But as the guest pointed out, well, what could you do with your social security number back then? Sharing that kind of information was not as critical. But if you think about that, not from a privacy standpoint but from a security standpoint: the more interconnected things are and the more that we rely on these new capabilities–like, for example, the fact that maybe all the traffic lights now are connected to each other–that poses a very different sort of security angle as well.

So yes, we get the benefits of having smart traffic. But that opens up a potential security exposure: someone can create chaos in a city and create gridlock if that infrastructure gets hacked. So it’s great on one hand, but from a security perspective, as security professionals, we have to realize that our job is getting bigger and bigger and bigger by the day, because of all that interconnectedness that’s happening in really every aspect of our lives.

[00:13:38] Camille Morhardt: Actually, I want to pile onto that really quick, because I did a What That Means with Claire Vishik, who’s a Fellow at Intel, and one of the things she focuses on is privacy. And the very example she gave of where security and privacy can sometimes even conflict was just what you said, Tom–the traffic light. Consider–it was hypothetical, she said, right–but consider the concept of traffic lights all linked also to all of the automobiles. And so they may be reading information about your car: where it’s going, its plans. However, that then means your privacy has to be preserved, also. So it’s not the business of the traffic light to link the trajectory of your car with the individual in the car. But in preserving the privacy, you sometimes run into these conflicts. Like, if you’re encrypting all of the private information within the vehicle, you may not be able to decrypt certain information in time to make sure that the traffic lights or signals or whatever can adjust for the safety of the situation. So, just as a hypothetical, you know, we’re seeing how these things are super interconnected at this point.

[00:14:51] Maribel Lopez: Yeah, I agree with that. There’s been a lot of discussion about what kinds of data you collect about where individuals are. And that’s been another thing that’s been at odds with, say, government employees–you know, understanding where your public works people are at all times. You know, on one hand you want to be able to know where they are, send them to the right place so they can, uh, act on an emergency, dig out snowbanks, whatever it needs to be. Uh, on the other hand, there’s the “hmm, you know where everybody is every waking hour of their day” kind of thing. And some people didn’t like that. So we’re also running up against the question of how the workers feel about it, in addition to the security of it. So there are multiple layers that we’re trying to unpack.

[00:15:38] Tom Garrison: Yeah. Well, I wonder if we could maybe extend to another big topic that we’ve all read about and a lot of us have talked about, and that’s Artificial Intelligence. What do you see from AI? What’s hype, what’s reality?

[00:15:53] Maribel Lopez: The greatness and the sorrow of AI is AI can take a lot of data, and it can find a lot of patterns and insights very quickly. And I think a lot of sophisticated malicious actors, as opposed to the group we were talking about earlier, are in fact using that to try to figure out whether there are specific types of vulnerabilities they should go after in specific categories of equipment. You know, you can send AI out trolling to look for things on the net that have a certain characteristic to them.

So I think that it’s a constant battle between the good guys and the bad guys, and both of them have AI tools. So I think that AI has been very good for a lot of the security solutions we have now; almost every security solution I can think of is AI-driven in some manner, shape, or form. If it’s not an actual physical hardware thing–if it’s a software thing–it has an AI component to it. But that is also being used on the other side.

So if you ask me where we are and we put it into, say, like, a baseball analogy, we’re probably in the fourth inning of what’s going on with AI. We have a lot of ways that it could still be improved. You know, you still have to go and look through models to make sure they’re doing what you thought they would do, and that they’re learning the way you think they should be learning. So it’s not a whole autopilot thing at this point. So that means that there’s still a lot of human in the AI loop. But, you know, AI is definitely helping malicious actors do their jobs better, and therefore you need AI on the other side, I guess is what I would say. The counterpoint to that is: if they’re using AI, you as an organization better make sure that you have some serious AI chops in the security software that you’re looking at.

[00:17:43] Tom Garrison: Uh, I think about another guest we had where we were talking about artificial intelligence, and I shared a story, which I’ll get slightly wrong: it took a very, very long time for humans to create a computer that could beat the best chess player. And when that happened, it became, you know, newsworthy that a great chess player was beaten by the computer. And then evolution continued on the computer side, to the point now where computers always beat humans, basically. And, uh–again, this is where I get the facts slightly wrong, but the general gist of this is accurate–some researchers trained a computer with AI playing chess. And within a very, very short time–I forget what it was, a few weeks I think, it was in that order of magnitude–the AI model could not be beaten. The only thing that could happen was, you know, ties. So, to your point, if one side of the equation is using Artificial Intelligence to attack another side, unless you have Artificial Intelligence to help you, it’s only a matter of time until you will be defeated. Right?

I agree with you completely that both sides will use Artificial Intelligence–you know, detecting anomalies, making sure nothing’s happened. But for sure, there’s way more hype out there today than reality. We’re still very early in this process with Artificial Intelligence, and we have a lot more to go. But there is a fertile ground of research and work that can happen and will happen.

[00:19:30] Maribel Lopez: I think there’s big AI and little AI, right? So, you know, if you relate AI to a human, you know, AI is, like, beyond toddler stage, maybe going towards teenager stage. And the reason that that’s important is, I think we spent a lot of years getting there–AI was a child for a long time, right? Now I think AI is becoming much more sophisticated. But as an organization, sometimes I think we get hung up on how advanced the AI has to be, as opposed to looking for ways that AI can be really, truly meaningful today, without being, like, all the way at the end of the strategy.

So let me give you an example. You know, one of the things we talked about a lot is using AI just to figure out if you have been breached–if there’s some activity that’s going on currently within, you know, somebody who’s just lying in wait for the perfect data or to set up the perfect attack. You know, there were statistics that it took nine months to a year for a lot of organizations to figure that out on their own. And if you can put in an AI solution and figure that out in a matter of weeks, that’s a huge win, because they don’t know that you’ve done that; you can actually work on scrubbing that leak out of your system. And I think that’s super powerful. And when I say this, I don’t want to demean, like, the sophistication of the AI. That’s still very sophisticated AI that does that. But sometimes I think we get wrapped up in the all-the-way-over-there, massive, mega deep learning and all this stuff we can’t do, when we have some really great things that, you know, probably every organization could benefit from today. And that’s what excites me about AI: that’s something that you can really sink your teeth into and have some value from today.
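The breach-detection pattern Maribel describes, learning what "normal" activity looks like and flagging deviations, can be sketched in a few lines. This toy example is purely illustrative: the telemetry numbers and threshold are invented, and real products model far richer signals. It scores daily outbound data volume with a robust z-score based on the median absolute deviation:

```python
import statistics

def flag_anomalies(values, threshold=3.5):
    """Return indices of points far from the median, scored with the
    median absolute deviation (MAD), which stays stable even when the
    outliers we are hunting for are present in the data."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:  # perfectly flat baseline: nothing stands out
        return []
    # 0.6745 rescales MAD to match a standard deviation for normal data.
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

# Hypothetical daily outbound traffic (MB): steady, then an exfiltration spike.
egress_mb = [120, 115, 130, 125, 118, 122, 940, 121, 119]
print(flag_anomalies(egress_mb))  # → [6], the day of the spike
```

Catching a long-dwelling intruder in weeks instead of nine months is the concrete win from the episode; the "learning" step here is just fitting a median, which is what keeps the sketch small.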

[00:21:12] Camille Morhardt: I know, you know, Maribel, you work with a number of Fortune 30 companies. So I would love to hear your insight into how they are convincing the board or the CEO or their peers that cybersecurity is something worth investing in. And are they having luck?

[00:21:31] Maribel Lopez: So, if you look at some of the reports of how long certain companies–large companies–have been down as a result of something, then you say, “Okay, let’s take this in our company. If we were down for four days, how much money would we lose in four days?” Then another metric would be to say, “Okay, let’s say we’re going to lose a lot of customer data as a result of this. Well, let’s look at who’s been fined for losing customer data and how much that could be.” So a lot of times, showing a monetary output is something that people are really interested in.

If you’re talking to the CMO, that could be an interesting discussion, because there’s money, but there’s also brand reputation. So then it’s examples of: what happens to brands when they lose a bunch of data? Like, do they lose customers? Do they lose revenue? Do they have to spend a lot more money to actually get themselves back to the same place they were? Did they have to do a lot of advertising? Did they have to do donations? Right.

Honestly, you’re going to go in at some point and you’re going to say, “We need the blah, blah”–insert the newest, most whiz-bang security technology name here, right? And then somebody is going to look back at you and say, “But haven’t we bought enough security widgets? Shouldn’t we be protected by now? Why do we need to buy this extra thing?” And there are a couple of reasons you need to buy the extra thing, technically, but typically they don’t understand the technical elements. What they will very much understand is that, you know, when somebody comes to you and says, “Hey, are we secure?” that’s a question that nobody can really answer truthfully. But it is a question that is legitimately asked of every senior business executive at some point in their career, and they want to be able to say that they did the things that they thought would make them secure.

You can’t predict everything. So the question is, how many of those are there? So I think that the people that are listening to this have to talk in their terms about what the risk is: X type of customer data might be at risk, or Y amount of dollars might be at risk, or the brand reputation of the company, because we’ve lost X, Y, Z. Or there could be serious safety ramifications, depending on the type of company it is.

So I think we have to turn it around: instead of telling them that they need X, Y, and Z technology, say, “You know, we’ve discovered there are some vulnerabilities, and we think that as a result of this, this could be the impact to the company.”

[00:24:05] Tom Garrison: Yeah, I would just add that the advice I would give is: start first with the problem you’re solving, as opposed to a technology-first implementation or argument. When you’re talking to, uh, you know, your peers, to decision makers within your company, or eventually even to customers, you need to first bring them into the world of: what is the problem? And you, as the designer or the developer or whatever, need to make them care about that problem. And if they don’t care about the problem, they’re not going to care about your solution.

So first start with the problem. And then from there, once you have a great understanding of what problem really matters, then develop a technical solution to solve that problem. And I see people way too often flip that around and they start with the technology and they look for a problem to solve with their technology.

[00:25:08] Maribel Lopez: Well, most business executives don’t understand the technology. They don’t understand the nuances of the technology. So in the absence of them understanding the nuances of the technology, you have to just tell them why it matters in the end, right? What’s at stake for you? How’s it going to impact your job?

[00:25:24] Tom Garrison: Exactly. Well, so Maribel, thank you so much for the conversation so far. But as I mentioned before to our audience, we have the last segment of our podcast. We always end in a segment called fun facts. Uh, we share some useless piece of information or trivia, and sometimes it’s useless and sometimes it’s useful. We’ll see. But I thought I would turn it over to you first with what is your fun fact you’d like to share?

[00:25:50] Maribel Lopez: Okay. So I recently moved to South Carolina, and I was reading an article the other day. And it was that, according to The State in Columbia, South Carolina actually produced more tons of peaches than Georgia, which is a big deal since Georgia is the Peach State. So that’s my fun fact.

[00:26:11] Camille Morhardt:  Thems is fightin’ words!

[00:26:14] Maribel Lopez: I know (laughs). I’m guessing somebody wanted to have bragging rights for a year, and I got a super chuckle out of it.

[00:26:23] Tom Garrison: That’s right. Little things matter to people, for sure. Alright so Camille, what is your fun fact? 

[00:26:30] Camille Morhardt: Yeah, I’ve got to tell you, my experience this weekend was amazing, because I actually went mushroom hunting in the coastal mountain range in Oregon. I went out with somebody who has done it a lot, and another person who has done it kind of her whole life, sort of, but they were like, you know, “We may very well not get anything. But there’s been a lot of rain and the right amount of sun and this and that.” So we drove up into the mountains, and hundreds of chanterelles later–it was incredible. I couldn’t believe it. You know, my takeaway was there were hundreds within an acre. So I was trying to figure out how common that was. And I did find out that there’s anywhere between zero and–some sources say 250 and other sources say 450–actual mushrooms in an acre.

So you can come upon zero, or you can come upon 450 mushrooms in an acre. Clearly you’re going to cover a lot of ground. Chanterelles don’t grow on any kind of medium that you’d use in a commercial growing facility, so they have to be picked wild. And they’re usually in Douglas fir or hemlock forests.

And they’re often under salal, uh, a kind of low brush vegetation, and they’re also often kind of by the side of the road where the drainage is washing away. So if you want to get out and look for some, go have fun. It was really a spectacular experience to be in a dripping forest, seeing these little golden sunshine spots.

[00:28:00] Tom Garrison: Great. That’s great. I’m going to stick with the Oregon mountains as well. Unfortunately, this story has a sad ending. I did not know that in World War II, there were actually civilian casualties in the continental United States. And in fact, those civilian casualties were in the mountains of Oregon, on Gearhart Mountain. And those casualties were from balloon bombs that were launched from Japan.

The idea was that these bombs would float over the continental US–some of them made it as far east as Iowa. Uh, the US military got most of them. But in this particular case, there was a group of six–five of them children–that happened to come across one of these balloon bombs that had come down, and they, uh, they triggered it. And they were the only civilian casualties in World War II in the continental US.

[00:28:54] Camille Morhardt:  That is dark, Tom. 

[00:28:57] Maribel Lopez:  On that rousing note, I think I’m going to have to go. (laughs)

[00:29:03] Tom Garrison: That’s right. That’s right. I don’t know, I wouldn’t call that a fun fact, but it is, I guess, interesting that there was that tactic in the war. But with that, let’s close out this podcast and thank Maribel again for joining. And we thank all of our audience–our live audience; we made it through, there was no cursing, it was great. And we welcome you to check out our podcast online, the Cyber Security Inside podcast, and we look forward to having you join us there. Thanks for joining today. Take care, everyone.
