Cognitive Electronic Warfare

In this episode, host Ken Miller welcomes guest Dr. Karen Zita Haigh. They discuss how cognitive systems and artificial intelligence (AI) can be used in electronic warfare (EW), and how these new technologies force us to rethink our underlying assumptions about a conflict environment.

Ken Miller (00:10):
Welcome to From the Crows Nest, a podcast on electromagnetic spectrum operations, or EMSO. I'm your host, Ken Miller, director of advocacy and outreach for the Association of Old Crows. Thanks for listening. On today's episode, I sit down with Dr. Karen Zita Haigh to discuss cognitive electronic warfare and artificial intelligence. Before I introduce her, I want to draw your attention to an upcoming episode of our sister podcast, The History of Crows, celebrating the contributions of women in electronic warfare as part of Women's History Month. This episode will be available wherever you listen to From the Crows Nest. All right. My guest today is Dr. Karen Zita Haigh. She's an expert on artificial intelligence and cognitive electronic warfare. She's an author, speaker, and consultant currently associated with Haskel Consulting, and she has a distinguished career across the technology industry. She also wrote a recent book with her co-author Julia Andrusenko, entitled Cognitive Electronic Warfare: An Artificial Intelligence Approach. Dr. Haigh, it's great to have you on From the Crows Nest. Thanks for joining me.

Karen Haigh (01:06):
Well, thank you for having me. Please call me Karen. When people say Dr. Haigh, I feel like someone's punishing me.

Ken Miller (01:11):
Sounds good, Karen. So before we get into the book, I want to provide a little bit of context for the discussion. Because we love to use the words, in our community, "cognitive electronic warfare," as if we all know what it means, but we don't. And there are probably a lot of different interpretations of that, which I think is why your book talks about different models. But at the beginning, there's really the problem that cognitive EW and artificial intelligence as a tool set is trying to solve. So I wanted to start by talking a little bit about the problem we find in traditional approaches to electronic warfare as our thinking about the electromagnetic spectrum is changing. I wanted to open that up to you, as a way of presenting the opportunity that AI can bring to the table.

Karen Haigh (02:04):
Yeah, so traditional EW approaches assume that the emitters that you're interacting with are relatively static, in that they won't change their behaviors. And over the past 15 or 20 years, we've gotten radios that are far more software-defined. And that means that we have the ability to change them on the fly, in milliseconds or faster. And so you've got, first of all, a timeline that's too fast for a human to respond to. Humans don't operate in sub-millisecond timeframes. Secondly, the complexity is very, very high, far more than a human can handle. Humans are pretty good at optimizing two, maybe three things at the same time. But these software-defined RF systems have thousands of potential things that you're trying to control to make a decision. And the other thing with traditional approaches, which really is the biggest problem, is that they are very limited to known emitters: things that we have seen in the field, and we've recorded, and we've written down everything that we know about them. And that is just not true as soon as you are in an environment where things can change rapidly.
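
A minimal back-of-the-envelope sketch of that complexity point, with purely illustrative numbers: once an emitter is software-defined, its configuration space multiplies out far beyond anything a pre-built lookup table, or a human, could enumerate.

```python
# Hypothetical parameter counts for a software-defined emitter; every value
# here is an assumption for illustration, not a real system specification.
params = {
    "center_frequency_bins": 1000,
    "bandwidths": 20,
    "modulations": 12,
    "pulse_repetition_intervals": 50,
    "hop_patterns": 100,
}

space = 1
for count in params.values():
    space *= count

print(f"{space:,} distinct configurations")  # 1,200,000,000
# No library of pre-recorded responses covers a space like this; the system
# has to generalize, which is the opening for machine learning.
```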

Ken Miller (03:13):
I wanted to go back. You mentioned complexity and how humans can maybe address a couple of things, but obviously, software-defined systems are able to address many more issues, many more emitters at any given time. In a previous episode that we had on the show back in the fall, we were talking a little bit about AI, and we talked about the notion of this understanding of the spectrum. If you look at our timeline chart, it goes from a contested environment to a congested environment as more and more emitters moved in. They were too complex. But we talked about moving into this notion of a chaotic environment. And I wanted to get your thoughts on that, because as we're talking about how humans can adapt to a particular environment, we can do a little bit of adaptation in complex environments, but we really do struggle even more in chaotic environments. How does the spectrum today look from a chaotic perspective, or is that a differentiation that is getting some attention in your field?

Karen Haigh (04:16):
So chaos, at some level, is just that there are too many things going on for us to track. I mean, if you could model everything perfectly, we would know exactly what's going to happen. We can't do that. Even in the most amazing systems out there, we can't model everything. What is it they say? All models are wrong, but some models are useful. So it's really a question of how far down that path we're going. And the better your receivers and your situation understanding are, in some sense, the less chaotic it is. Today's environments are probably not what I would call chaotic, but extremely complex. That doesn't mean that next week they won't have achieved what everybody else would call chaotic.

Ken Miller (05:00):
So when we talk about electronic warfare in these complex environments, it's made up of several different components: obviously electronic support, and then electronic protection, which is the resiliency piece. And then electronic attack, which is oftentimes treated as synonymous with EW, where a lot of us are talking mostly about EA, sometimes to the detriment of the other components. You have electromagnetic battle management, EMSO, and so forth. Could you go into how artificial intelligence is addressing each of those different aspects of electronic support, protection, and attack?

Karen Haigh (05:36):
Yeah, sure. In AI, we talk about the concepts of situation understanding, and then decision making, choosing actions for how to interact with the environment. The concepts of machine learning come as a layer above the situation understanding and the decision making. When you move into the EW world, those situation assessment and decision making capabilities flow into and appear in all of the various components of an EW system. If we're talking about electronic support, that is most closely aligned with situation assessment, and you might do things like behavior characterization, automatically extracting the features of the RF environment. You might want to do classification, which is saying, "Okay, this is this type of waveform," or "This is that specific emitter." You may want to pull in concepts like data fusion, where you're trying to pull in other information that is not strictly RF-based, or perhaps comes from across multiple platforms.
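
As a minimal sketch of that classification step, assuming made-up feature vectors and emitter classes, a standard classifier over extracted RF features might look like this:

```python
# Illustrative only: synthetic features [freq_GHz, bandwidth_MHz, pri_us]
# clustered around three assumed emitter types; not any fielded system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
prototypes = {
    "nav_radar":  [9.4, 1.5, 1000.0],
    "comms_link": [2.4, 20.0, 0.0],
    "jammer":     [9.4, 200.0, 0.0],
}
X, y = [], []
for label, proto in prototypes.items():
    X.append(rng.normal(proto, scale=[0.05, 0.5, 5.0], size=(100, 3)))
    y += [label] * 100
X = np.vstack(X)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[9.41, 190.0, 2.0]]))  # -> ['jammer']
```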

Karen Haigh (06:35):
As you're looking at anomaly detection, those are the unusual, novel situations that you're encountering. Causal relationships: can you detect what made an event happen, maybe previously in time? Maybe you're looking at what the subnets are out there, if there are multiple emitters that you're tracking. And then intent recognition: what is the future intent? What's going to happen? For example, a big jammer may be a nasty, horrible jammer, but if it's not actually impacting your part of the environment, it doesn't matter. You don't need to do anything about it.

Karen Haigh (07:10):
The concepts of electronic protect and electronic attack correlate most closely with the AI concepts of decision making. And there, you have things like planning, which is the longer term concept of laying out what needs to be done and how you're going to achieve it. Optimization is where you start looking at choosing between your available alternatives to try to achieve multiple objectives with specific goals in mind: what matters more, what matters less. And then scheduling: what are you going to do at the nitty-gritty, lowest levels of the system, the actual transmit and receive?
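
A minimal sketch of that optimization step, with hypothetical actions, objectives, and mission weights: score each candidate action against what matters more and what matters less, and pick the best.

```python
# Candidate EP/EA actions and their scores against each objective (0..1).
# Actions, objectives, and weights are all illustrative assumptions.
candidate_actions = {
    "hop_frequency":   {"throughput": 0.7, "detectability": 0.9, "power_cost": 0.8},
    "increase_power":  {"throughput": 0.9, "detectability": 0.2, "power_cost": 0.3},
    "change_waveform": {"throughput": 0.6, "detectability": 0.8, "power_cost": 0.9},
}

# Mission-dependent weights: what matters more, what matters less.
weights = {"throughput": 0.5, "detectability": 0.3, "power_cost": 0.2}

def utility(scores: dict) -> float:
    return sum(weights[obj] * val for obj, val in scores.items())

best = max(candidate_actions, key=lambda a: utility(candidate_actions[a]))
print(best)  # -> 'hop_frequency' under these weights
```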

Karen Haigh (07:49):
The support and protect and attack functions all happen in millisecond or faster timeframes. As you move into slower timescales, into things that become more human-relevant, that's the battle management and/or network management, depending on exactly what your mission is. That's where you start looking at the longer term planning issues, the ability to do mission management, and all of the human factors: how do you interact with the human to accomplish the very high level mission goals that you're looking at?

Karen Haigh (08:23):
There's another field of AI that we talk about: knowledge management. We can record data, but until you're actually recording that data in such a way that you know why it was recorded, it's just a big bucket of numbers. Being able to tag it appropriately, so that you can look for the right information later, is what gives you the power to build a machine learning system on top of a good underlying database.
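
A minimal sketch of that knowledge-management point, with an assumed (not standard) tagging schema: captures recorded along with why they were collected can be queried later, instead of sitting in a big bucket of numbers.

```python
# Illustrative capture archive; the schema and paths are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RFCapture:
    samples_path: str                 # raw IQ data stays on disk
    collected_at: datetime
    tags: dict = field(default_factory=dict)

archive = [
    RFCapture("iq/0001.sigmf", datetime(2022, 3, 1, tzinfo=timezone.utc),
              tags={"mission": "exercise-a", "label": "jammer", "platform": "uav"}),
    RFCapture("iq/0002.sigmf", datetime(2022, 3, 2, tzinfo=timezone.utc),
              tags={"mission": "exercise-a", "label": "unknown"}),
]

# Because captures carry the reason they were recorded, a later training job
# can select exactly what it needs instead of a bucket of untagged numbers.
jammer_data = [c for c in archive if c.tags.get("label") == "jammer"]
print(len(jammer_data))  # -> 1
```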

Ken Miller (08:52):
Now, you talked about a lot of these concepts in your book, Cognitive Electronic Warfare: An Artificial Intelligence Approach, which you wrote with your co-author, Julia Andrusenko. I was wondering, what prompted you to write this book? Can you tell us a little bit about the thought process behind how and why you picked this topic and what you wanted to get across to the reader?

Karen Haigh (09:14):
So probably a healthy dose of insanity, but fundamentally, I've been working in embedded artificial intelligence, small devices, small compute, low communications, my whole career. I built mobile robots back when I was doing my PhD. And I discovered the RF problem space about 15-ish years ago, and discovered that it was a really rich area with technical problems and challenges that I thought AI had a good opportunity to solve and help address. And what I've discovered over the last 15 years is that people have a lot of myths. And the media hasn't helped, of course. "You must have a lot of data in order to do machine learning." Well, not really. You can do things like leverage the physics of the RF environment. You don't actually always need to do it from the raw data.

Karen Haigh (10:09):
And in fact, in many EW domains, we don't have a lot of data. We may only have one or two examples before we have to actually immediately respond to something. The concept of evaluating a system that learns in the field, this is something where everybody tells me, "No, you can't do that. You can't evaluate a system that is handling novel situations." It's like, "Well, actually we can. There are some excellent approaches for doing it." And I don't know how many people have told me, "We will never field a system that learns in the field. We will always pre-train it and then put it out in the field."

Karen Haigh (10:45):
And it's like, again, we actually have systems that do learning in the field. And so limiting ourselves to not thinking about those as possibilities limits everything that we think about and limits what we can accomplish in that world. So really, basically, what I wanted to do was take everything that I talk about in one-on-ones or small groups in the various environments I work in, and say, "Okay, this is what I've learned. And maybe we can pull these things together and not have to always address the same problems over and over again."

Ken Miller (11:22):
We appreciate that, because you've obviously been in electronic warfare for a couple of decades. And as we've tracked some of these problems, we are constantly dealing with the same problem, it seems, over and over again. In recent years, I feel like we've been making some progress. And a lot of that is related to the possibilities that AI is bringing to the table. I like what you said. I want to go back a little bit to this notion that, when we talk about AI, a lot of the conversation goes to the amount of data that's out there.

Ken Miller (11:53):
And you mentioned earlier that you don't necessarily need a lot of data; you need to be able to use the data properly. And that's an important distinction that I want to go back to, because when we're looking at missions out in the field in a particular environment, how tempting is it for a system to focus too much on trying to collect all the data, instead of trying to collect the right data, deciphering what it needs to focus on, and then processing that fast enough for the decision maker?

Karen Haigh (12:26):
Well, that's an interesting question. Because in the field, at some level, you are collecting all the data anyway. And if I had infinite space on my device, which we don't, I would record everything, and I would make absolutely sure I was tagging it with everything that's out there. But we don't have infinite space to record everything. And so we have to figure out what exactly it is that we're trying to record. And quite frankly, I would rather have one example each of three different things than a million examples of one thing. That diversity gives me far more power to do the right thing next time around. Think about a toddler: if they are sitting in their high chair throwing toys on the ground, they throw all kinds of different toys on the ground because they're doing gravity experiment number 17, gravity experiment number 423. And each time they throw a different toy on the ground, they learn something different.
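
A minimal sketch of that diversity-over-volume idea, assuming a simple per-class quota (the quota, IDs, and class names are hypothetical): with finite storage, keep a few examples of many distinct things rather than many examples of one.

```python
# Keep at most a handful of examples per emitter class, so storage is spent
# on diversity rather than repetition. Quota value is an assumption.
from collections import defaultdict

MAX_PER_CLASS = 3
retained = defaultdict(list)

def maybe_retain(example_id: str, emitter_class: str) -> bool:
    """Keep the example only while its class is under quota."""
    bucket = retained[emitter_class]
    if len(bucket) < MAX_PER_CLASS:
        bucket.append(example_id)
        return True
    return False  # storage saved for classes we haven't seen enough of

for i in range(1000):
    maybe_retain(f"pulse-{i}", "emitter-A")   # only the first 3 are kept
maybe_retain("pulse-x", "emitter-B")          # a new class always gets space

print({k: len(v) for k, v in retained.items()})  # {'emitter-A': 3, 'emitter-B': 1}
```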

Ken Miller (13:22):
So can we talk a little bit about the approach artificial intelligence brings to the table between the collection side and the processing and distribution side? Because one of the challenges, particularly in the military, is the division of responsibilities. Some of it's Title 10, Title 50, some of it's intel versus operations. You're collecting on one side, the ES side of the equation. Oftentimes, some of the data that's collected there falls into a different bucket, and therefore can't be shared in an efficient way. Do some of our structures and our organization of how we use the data limit the possibilities that AI can bring to the table, in terms of how quickly we can sift through and make decisions on the environment?

Karen Haigh (14:13):
Yes. I mean, to the extent that we have policies in place that don't allow us to share the data, that is absolutely limiting. It's no different from healthcare data. You don't want your medical history shared with everybody just because it would be scientifically interesting. I think the bigger problem is less the policies than bad habits. And here's a simple example. Let's say we're talking about an MDF, a mission data file, that matches the current RF environment very, very well. And we know exactly how to respond to a particular threat. What the system will typically do is respond to the threat and not tell an AI that's working in the background about what it's seeing. So once that threat has been dealt with, your system is now in a situation where it is getting data that the MDF knows nothing about.

Karen Haigh (15:07):
There is RF data. There is something, but it doesn't know what it is. The traditional system doesn't know what it is. So it starts feeding that data to the AI. And it's up to the AI to figure out what to do in the novel setting. The problem is that the AI hasn't even been able to watch the history, the minutes leading up to that threat being dealt with, to know that there was stuff going on in the background. If the AI had the ability to track those unknown signals over the historical timeframe, it could potentially do a much better job of knowing how to deal with the novel situation, even if it wasn't actually responding to it. And when I say a bad habit, that is how these legacy systems are often architected. And it makes it difficult for an AI to operate in these complex environments, where there are many, many things going on.
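
A minimal sketch of the architectural fix implied here, with illustrative sizes and field names: every detection, including ones the legacy MDF already handled, goes into a shared rolling history, so a background AI has the lead-up when something novel appears.

```python
# Rolling detection history shared with the background AI. The window length
# and detection fields are assumptions for illustration.
from collections import deque

HISTORY_SECONDS = 60
history = deque()  # (timestamp, detection) pairs, oldest first

def record_detection(timestamp: float, detection: dict) -> None:
    """Every detection goes into the shared history, even 'already handled' ones."""
    history.append((timestamp, detection))
    while history and history[0][0] < timestamp - HISTORY_SECONDS:
        history.popleft()

def context_for_ai(now: float) -> list:
    """The AI sees the full recent picture, not just the leftovers."""
    return [d for t, d in history if t >= now - HISTORY_SECONDS]

record_detection(0.0, {"emitter": "known-threat-7", "handled_by": "MDF"})
record_detection(1.5, {"emitter": "unknown", "handled_by": None})
print(len(context_for_ai(now=2.0)))  # -> 2: the AI saw the lead-up too
```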

Ken Miller (16:05):
In your book, you use a running example of the BBN strategy optimizer. Can you expand on that example and explain why you used this specific example in the book and what it means?

Karen Haigh (16:17):
So I used to work for a company called BBN, Bolt Beranek and Newman, out of Boston. It's now a wholly-owned subsidiary of Raytheon. In cooperation with BAE, we built a system that does communications electronic protect. The BAE people did all of the feature computation. And I, at BBN, did the decision making to choose how to mitigate interference of different types. I used this example for a variety of reasons. Over the phases of the program, we went from blue sky, crazy ideas, "Let's see if this silly thing will work," all the way through actually fielding the system.

Karen Haigh (17:00):
And so I was able to test technological ideas, novel ideas, do research, write research papers, and then actually see what happens when you put it on a small embedded device that has no compute, no power, no memory, legacy 1998 hardware. How does it work when you're writing something that needs to operate in a millisecond timeframe? So I really got the whole perspective of how the system works in a setting that is real and realistic, in such a way that I could talk about it.

Ken Miller (17:38):
So one of the things that comes out in your book, and I never really thought about the difference between some of these components, because like many, I just lump everything under cognitive EW and don't think about some of the finer points of difference within it. But you talk about the difference between adaptive, cognitive, and aggregate incremental learning. And I guess some of what you found was that aggregate incremental learning is more optimal or efficient than some of the other methods. So could you talk a little bit about the difference between adaptive, cognitive, and aggregate incremental learning?

Karen Haigh (18:21):
So an adaptive system is something that changes based on what it's observing. The thermostat in your house turns the furnace on or off, depending on whether it has accomplished the temperature goals that it is trying to achieve. So that is an adaptive system that is responding to the temperature in your house. Until you actually layer on the ability to modify the function that changes the decisions, you are not in a cognitive system. You are in a simple control loop. And when we talk about this in the EW environment, you can put together situations where you may have trained on a very large number of different emitters, and know how to respond to them, and how to maintain your performance over time. But as soon as you're out in the field, you are now in a setting where there are novel emitters.
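
The thermostat makes a compact illustration. A minimal sketch, with illustrative thresholds: the decision rule reacts to observations, but nothing ever modifies the rule itself, so the system is adaptive, not cognitive.

```python
# A fixed control loop with 1-degree hysteresis; the rule never learns.
def thermostat_step(current_temp: float, setpoint: float, furnace_on: bool) -> bool:
    if current_temp < setpoint - 1.0:
        return True     # turn/keep furnace on
    if current_temp > setpoint + 1.0:
        return False    # turn/keep furnace off
    return furnace_on   # inside the deadband: hold state

state = False
for temp in [17.0, 18.5, 19.8, 21.5]:
    state = thermostat_step(temp, setpoint=20.0, furnace_on=state)
    print(temp, "->", "ON" if state else "OFF")
```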

Karen Haigh (19:12):
And an adaptive system is unable to change its knowledge base in a way that allows it to handle those novel emitters. It can only do the best it can, based on what it learned previously. Meanwhile, a cognitive system has the ability to update the way it is thinking about the problem. It can update those internal models so that it can respond to the novel situations. What ends up happening is that if we are in a setting where you were unable, for some reason, to train on any of the data that you actually expect to encounter in the field, an adaptive system can't do anything. It's like a watch that's broken: it is correct twice a day. A cognitive system has the ability to say, "Oh, gee. Here is what I've just encountered. These are the things I tried against it. This is what worked well. This is stuff that didn't work at all. And here's some stuff that sort of worked."

Karen Haigh (20:05):
And it's able to take that, so that over the course of time, maybe the first time it encounters a jam, it has no way to mitigate it. But over time, it starts learning: "Okay, well, I can still get data through by taking these responses," and it keeps learning so that it can do that over time. And so what you end up with is a system that can accomplish something very close to optimal. And by optimal, I mean the best it could do if it knew every single emitter that it would ever encounter.
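
A minimal sketch of that learn-in-the-field behavior, using a simple epsilon-greedy bandit with made-up mitigation strategies and a toy reward model: try responses, record what worked, and converge toward the best available one.

```python
# Online strategy selection against an unknown jammer; strategies and the
# hidden reward model are illustrative assumptions.
import random

strategies = ["hop", "null_steer", "spread", "wait"]
totals = {s: 0.0 for s in strategies}
counts = {s: 0 for s in strategies}

def observed_throughput(strategy: str) -> float:
    """Stand-in for the field: unknown to the learner ahead of time."""
    truth = {"hop": 0.6, "null_steer": 0.9, "spread": 0.4, "wait": 0.1}
    return truth[strategy] + random.gauss(0, 0.05)

random.seed(1)
for trial in range(200):
    if random.random() < 0.1:   # explore occasionally
        s = random.choice(strategies)
    else:                       # exploit what has worked so far
        s = max(strategies, key=lambda s: totals[s] / counts[s] if counts[s] else 1.0)
    totals[s] += observed_throughput(s)
    counts[s] += 1

print(max(strategies, key=lambda s: totals[s] / counts[s]))  # -> 'null_steer'
```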

Ken Miller (20:39):
And so with aggregate incremental learning, is that a step beyond the cognitive approach, or how does that piece of the puzzle fit into the picture?

Karen Haigh (20:52):
So incremental learning is where you take a learned model and you add layers on top of it as you encounter more information. When I talk about aggregate learning, I am talking about it over a mix of scenarios and a mix of settings. So it is simply a metric by which we define cognitive, because things can be not very cognitive or they can be quite cognitive. An MDF, if you're doing simple one-to-one exact matching of a threat to the MDF, that's lookup, and it's 0% cognitive. But if you start fuzzing your MDF lookup and you're allowing some similarity metric that isn't identical, then you're moving into level-one cognitive. It's really simple, but it starts moving in the right direction. And by the time that you're doing very soft, fuzzy matching, you can do the hybrid, and recognize that these capabilities work in those settings and the other things work somewhere else, and you mix and match; that's where you're in a situation where you're fully cognitive.
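
A minimal sketch of those first two rungs, with an invented two-entry MDF: exact lookup fails on any threat that is off by a hair, while a similarity metric recovers the nearest known response.

```python
# The MDF entries, features, and distance normalization are all assumptions.
import math

mdf = {  # (freq_GHz, pri_us) -> prescribed response
    (9.40, 1000.0): "technique_A",
    (5.60,  250.0): "technique_B",
}

def exact_lookup(freq, pri):
    return mdf.get((freq, pri))  # 0% cognitive: known threats only

def fuzzy_lookup(freq, pri, max_distance=0.1):
    """Level one: match the nearest MDF entry, if close enough in feature space."""
    def dist(entry):
        f, p = entry
        return math.hypot((freq - f) / 10.0, (pri - p) / 1000.0)
    nearest = min(mdf, key=dist)
    return mdf[nearest] if dist(nearest) <= max_distance else None

print(exact_lookup(9.41, 998.0))  # -> None: off by a hair, no answer
print(fuzzy_lookup(9.41, 998.0))  # -> 'technique_A'
```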

Ken Miller (22:00):
A lot of times, we talk about training in realistic environments. And in an episode that we're working on for our sister podcast, The History of Crows, we were talking about the ALQ-99 support jammer that is now on the EA-18G Growler. And one of the stories that came out is basically that, at different points in time, you develop a system that looks to be able to do all these great things, and then you put it out in the field and, tactically, there are certain elements that are just basically irrelevant, and you can't use it in the real world environment.

Ken Miller (22:33):
So relating that to how we're going about AI: obviously, when you design a system, develop it, and test it in a controlled environment, then put it out in the field, do we have the right approach to testing, both in the lab and through to the field, so that we can really maximize some of the benefits of AI in our systems? Or what do we need to change? Not that it happens a lot, I want to be clear, but we don't want to be in a situation where we're putting tactically irrelevant capabilities in the field because we haven't tested them properly in realistic environments.

Karen Haigh (23:09):
Yeah. So in the utopian world where everything goes exactly the way I want, I want one test environment that can use data replay from recordings of actual missions. It can use modeling and simulation at whatever level of fidelity. And it can put hardware in the loop, all in one box. So, for example, if we imagine the scenario where those drones were flying over Gatwick Airport, could we, back in the lab, mimic that? Maybe we fly some drones out in the backyard, but all the modeling and sim is doing the stuff with the background communications and radar traffic that is happening at that airfield. So that's where I would like to be. And we are very far away from that. We do not have the ability to generate waveforms on the fly, which, if you're a hacker, you could be doing whatever you want with your waveforms.

Karen Haigh (24:07):
The biggest challenge that we have right now is the assumption that we can record data and then play it back. And you'll see this in every program that's out there, assuming you can record data at all. But we record it and then call that done. That's fine for situation assessment. I can tell you if I have recognized the emitters in the field. It is not sufficient for an EW system where you're operating in an adversarial environment and taking actions, because every single time you take an action, you change the way the data should look.

Karen Haigh (24:43):
So if you are responding to a jammer, suddenly, all the recorded data that you had is no longer relevant. And we don't do that closed loop. That closed-loop step is extremely important. It's utterly crucial. And we don't do anywhere near enough of it. There are a variety of interesting test systems out there, but they certainly don't stitch together all of the capabilities up and down, and they don't accomplish the levels of fidelity that we need. We really need to be testing at all of those levels: thousands of experiments at low fidelity and handfuls of experiments at high fidelity. And we have nothing like that right now.
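
A minimal sketch of why replay falls short, with a toy reactive adversary standing in for modeling and simulation of any fidelity: because the jammer responds to each action the system takes, no static recording could contain this exchange.

```python
# Toy closed-loop harness; the adversary model and hop logic are assumptions.
class ReactiveJammerSim:
    """Toy follower jammer: moves to whatever channel we transmit on."""
    def __init__(self):
        self.jammed_channel = 0

    def observe(self, our_channel: int) -> dict:
        return {"channel": our_channel,
                "jammed": our_channel == self.jammed_channel}

    def react(self, our_channel: int) -> None:
        self.jammed_channel = our_channel  # replay data could never show this

sim = ReactiveJammerSim()
channel = 0
for step in range(3):
    obs = sim.observe(channel)
    if obs["jammed"]:
        channel = (channel + 1) % 10  # system under test: hop away
    sim.react(channel)                # the world responds to our action
    print(step, obs)                  # the chase itself is the test signal
```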

Ken Miller (25:21):
And what do we need to change in the short term? What are some short term steps to at least get on that path, so that we know we're trying to tackle the problem correctly?

Karen Haigh (25:34):
So certainly, within comms, we have many of those component pieces. And so starting to pull them together, enabling that capability, and recognizing that we can do communications waveforms in a closed-loop setting: the pieces are there, and starting to pull them together is the important thing. In radar, that's a much more complicated environment from the modeling and sim perspective.

Karen Haigh (26:01):
And certainly, in most of the military settings that I've worked in, the assumption is that the hardware is extremely expensive. You can't expect an academic, for example, to buy some of these systems to do a test. And even if they can buy one, one is not enough. You need 10 or 1,000. So the steps there might be, for example, creating models for simple radar waveforms that could operate on a software defined radio. So it's a $2,000 piece of hardware. It is doing something that looks like a radar waveform, even though it does not approach even the complexity of what you might see in a weather [inaudible 00:26:46] station, for example.
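
A minimal sketch of that low-cost idea, with illustrative parameters: a simple linear-FM radar-like pulse train generated in numpy, of the sort that could be replayed through an inexpensive software defined radio. This is far simpler than a real radar, which is exactly the point.

```python
import numpy as np

fs = 20e6            # sample rate, Hz (within common SDR limits; assumed)
pulse_width = 20e-6  # 20 us pulse
bandwidth = 5e6      # 5 MHz chirp

t = np.arange(0, pulse_width, 1 / fs)
k = bandwidth / pulse_width            # chirp rate, Hz/s
pulse = np.exp(1j * np.pi * k * t**2)  # complex baseband LFM chirp

pri = 1e-3                             # 1 ms pulse repetition interval
quiet = np.zeros(int((pri - pulse_width) * fs), dtype=complex)
burst = np.tile(np.concatenate([pulse, quiet]), 10)  # ten-pulse train

print(len(pulse), len(burst))          # 400 samples per pulse; 200,000 total
```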

Ken Miller (26:47):
So with AI, we oftentimes talk about, instead of having a man in the loop, we have a man on the loop, involved in some capacity, overseeing what's going on. And that gets into the training piece. Do you have any thoughts on how we are training to use AI technology in the field? How difficult is it to keep up with the training so that we know exactly how to use this new technology or evolve this technology in the field? Are we training appropriately? Do we need to embed war fighters earlier into the design and development process so that they understand the technology a little bit better before they get their hands on it in a real world environment? What are some of your thoughts on the training element involved in this?

Karen Haigh (27:34):
So I would have two things to say here. And the first is that we already use AI everywhere in our lives. Your cell phone has more AI on it than you could possibly imagine. Fly-by-wire aircraft? That's all AI. We know how to do these systems and get users to be able to handle them. But as soon as we say, "Oh, it's got AI," suddenly people become afraid of it, because it's something that they don't understand. So that's the first piece of it. The second piece of it, I think you raised the point directly: bringing people, the end users, into the system, into the conversations, earlier is an incredibly important step.

Karen Haigh (28:13):
Recognizing what their capabilities are, what can they do, where can they add value? Because the AI is operating down in that sub-millisecond timeframe. And it isn't going to have the full picture that a human might. So where and how can you bring that human into the loop in such a way that it is benefiting both sides? It not only makes your system better, but it also allows the human users to understand where the weaknesses of the system are, which of course makes the system better because then it can come back into the design. But it helps them understand where and how to use it and when it will be most effective in the field.

Ken Miller (28:55):
You mentioned how, basically, we use AI in our everyday lives. And oftentimes, we don't even know we're really using it. It is just commonplace now. In many ways now, it's forcing us to rethink our underlying assumptions about the particular environments we're in, both on the personal, commercial side and also the military side. And AI and machine learning, basically, don't have to simplify the world to a level that humans can understand. They're operating at that millisecond, deeper level, making sense of the data that's out there. And so this opens up a lot of possibilities.

Ken Miller (29:29):
But as you mentioned, sometimes when you mention AI, machine learning, deep learning, and contextualizing data, it can freeze people. How do we prevent advances in AI and machine learning from paralyzing us to a point where we're not really able to optimize the use of this technology, particularly in military environments, but also in general society? You think about self-driving cars and such. All of a sudden you start to think about this, and it becomes so overwhelming that you almost want to peel back from it, because you don't want to go further than what you understand the technology can do.

Karen Haigh (30:04):
Self-driving cars are a great example. People seem to be afraid of them, and I honestly can't understand why; self-driving cars have much lower accident rates than human beings do. But the system can't explain why it did something stupid, whereas your teenager might be able to. Maybe. The fact that we feel like we can interrogate the teenager, whereas we don't feel like we can interrogate the AI in the same way, has certainly caused hesitancy. And this is true across the board: we fear what we do not understand. But that said, we have all learned how to use the brakes and the gas pedal in the car. And once upon a time, that was terrifying to people as well. So we will get there. We just have to continue taking steps. I find it fascinating that, I think it's the European aviation authority, has a set of rules that say, "If it's AI, then you have to tell the person. But if you don't call it AI, then you don't." And it's like, "But what is AI exactly?" I mean, it's essentially enhanced mathematics that can do approximations in a way that traditional mathematics or control theory can't.

Ken Miller (31:19):
So to wrap this up, I wanted to go back to your book. You mentioned at the beginning that maybe a little bit of insanity brought you into writing the book in the first place. And of course, insanity is doing the same thing over again and expecting a different result. So you're probably going to write a second book on this topic in the next few years. What are some of the recommendations or conclusions out of the book that are either driving your thinking today, or may influence the writing of a second book in the near future, that you want the listeners to understand about this topic?

Karen Haigh (31:54):
Well, the snarky answer is that I wrote a book before, so I did fall prey to the insanity of doing the stupid thing twice. I think, to the extent that we can start addressing the small problems and building up the solutions, I don't want to have to be addressing the same myths again. I want those myths to be gone by the next time this comes around.

Ken Miller (32:22):
That will conclude this episode of From the Crows Nest. I again want to thank my guest, Dr. Karen Zita Haigh. Just a reminder that we are conducting a survey of our listeners, and we want to hear your thoughts. Now, please take a couple of minutes to give us your feedback. You can find a survey on our website at crows.org/podcast and in the show notes. Thank you for listening.
