How is AI Changing EW?
Ken Miller [00:00:09]:
Welcome to From the Crow's Nest, a podcast on electromagnetic spectrum operations, or EMSO. I'm your host, Ken Miller, Director of Advocacy and Outreach for the Association of Old Crows. You can follow me on LinkedIn or you can email me at host@fromthecrowsnest.org. Thanks for listening. In this episode, I am pleased to have on the show Rob Hyland. He is a Principal Scientist and Director of Transition at Charles River Analytics. We're going to discuss artificial intelligence innovation and how we need to think outside the box if we are going to successfully address the many challenges in this area moving forward. Before we get to my conversation with him, just a couple quick notes. First, if you haven't yet, please take some time to check out season one of our From the Crow's Nest CTO series, where across six episodes we take a deep dive into next-generation EW for air superiority. Season one is powered by L3Harris.
Ken Miller [00:01:03]:
If you or your company or your agency is interested in sponsoring a future season, please let me know. We're hoping to have at least two more seasons out before the end of this year, but it's an ongoing project and we're always looking for more seasons, so please don't hesitate to email me to learn more. Also, next month is, as many of you know, Women's History Month, and each year we typically try to do something through the podcast to recognize the contributions of women in EW. This year is no different, so all of our episodes in the month of March are going to focus on the contributions of women in our field: past, present, and future. We start next week with our subscription AOC members-only edition. I am pleased to have with me Laurie Buckhout, a retired US Army colonel and the founder and former head of Corvus. She was a congressional candidate in this most recent election in North Carolina's First District, and she just narrowly lost to the incumbent.
Ken Miller [00:02:06]:
She's one of the smartest people you'll ever talk to. A great person, a good friend, and quite frankly one of the most influential voices in terms of where we are today in EW, especially as it pertains to the Army. So it'll be a great conversation. However, you can only listen to it if you are an AOC member or you are a subscriber. So if you have not already subscribed, please consider doing that. It's only $2.99 a month. You get all those additional subscription episodes, and you get to participate, if you want, in the live virtual audience when we record these episodes. That means you not only get to hear the raw cut of it up front, but you also get to ask questions, engage the guests, and suggest topics.
Ken Miller [00:02:48]:
It's really a great opportunity to engage experts in this field and learn more, and all are welcome. You don't have to be a senior leader or an engineer; if you just have a general interest in EW, our conversations are for you, so please take a moment to do that. It'll be a great series, a great month of new guests coming on the show and talking about a topic that needs more attention. With that, I'd like to welcome Rob Hyland to From the Crow's Nest. Again, Rob is a Principal Scientist and Director of Transition at Charles River Analytics. Rob, I greatly appreciate you taking the time to join me here on From the Crow's Nest. Thanks for joining me, Rob.
Rob Hyland [00:03:26]:
It's so great to talk with you.
Ken Miller [00:03:27]:
So just by way of background: we had AOC 2024 last month, or actually back in December, and I interacted with some folks from Charles River, who presented at AOC 2024 on rapid reprogramming. I thought that was a topic we really needed to explore a little bit more on the podcast, to talk about artificial intelligence, machine learning, and the role that's playing in advancing next-generation systems. Unfortunately, my schedule changed during the convention and I was unable to get any recording, so we reached out, and I really appreciate you taking additional time to come join us here today for a more in-depth conversation on this topic. So thank you for that. To begin, for our listeners, tell us a little bit about yourself and Charles River Analytics and the role that you're playing in the AI and machine learning space.
Rob Hyland [00:04:22]:
Yeah, first of all, thank you so much for having me on. I'm really excited to talk about AI in the electromagnetic spectrum. A little bit about my background: I have been researching, developing, and deploying artificial intelligence systems for 30 years now, and went to school to get some advanced degrees before that. So it's something I've been involved in across most of the arc of AI; I shouldn't say the whole arc, since it goes back even to the 1950s. Charles River Analytics is a particular place. I've been here six years now, and I've known them pretty much my whole career. Charles River started in 1983. We're based in Cambridge, Massachusetts, with about 140 employees, and we focus on signal processing, sensing, perception, and applied robotics, which is particularly relevant to our conversation today, but also on deep AI algorithm development with the human at the center.
Rob Hyland [00:05:24]:
So we have a service organization called Human-Centered AI. And then the last group of research that we have is under User Experience Innovation, and that is putting the user at the center. It's great that we have algorithms and analytics to help us, but if it's not in the decision cycle, it's less effective.
Ken Miller [00:05:47]:
So in our conversations here on From the Crow's Nest, we obviously focus on electromagnetic spectrum operations and where this field is going. I think if you asked our community what some of the drivers are, you'd get a lot of common answers. From the threat side, it's obviously China and the ongoing fight with China; whether you want to say we're already at war or not is a matter of opinion, but China is certainly the threat driver. And on the technology side, I think most everyone will point to where we are going with artificial intelligence and machine learning as the technical driver of where our systems are headed. And I think Charles River, and the role that you play, is bringing all of that together into your work in this space.
Ken Miller [00:06:39]:
And so, just to get into it: you mentioned in your introduction that AI has been around for a long time, for decades really, at least the experimentation with it. Most of the conversation about deploying systems with AI, though, has come in the last 10 years; it's ramped up a lot. So could you share briefly where we've come from, and what is really the driver bringing all the work being done on artificial intelligence and machine learning out into the open in this field?
Rob Hyland [00:07:14]:
I think you framed it exactly right: there is both a tech driver and, you know, a great power threat driver as well. So first, on the tech driver: there have been surprises even in the last few months, with DeepSeek R1. That's all getting settled right now. But also the breakthroughs in deep neural networks and in the large language models. I'm going to frame this for the audience as a larger arc, because when you have big advancements in technology, they often have a 50- to near 100-year arc. I'm not going to go all the way back there. But here is something I like to think about. We do a lot of research and development for all the service agencies, and I'll use a particular framework that DARPA likes to use, which is the three waves of AI. That goes from the handcrafted ways to describe and reason about a system, to the statistical approaches that got very popular in the second wave, and now there's this third wave of AI, which, put simply, brings those two together.
Rob Hyland [00:08:31]:
But DARPA likes to talk about contextual adaptation. And I mention that term because it is relevant to EMSO and EMS and the processing pipelines for rapid reprogramming.
Ken Miller [00:08:44]:
And I would imagine that with this arc you have Moore's Law, or something like it, applying, where it's not just coming together, but coming together exponentially faster with each step. Is that accurate? And how does that play into trying to keep up with it? Because as we go along, development is getting faster and faster, and the demands for what we have to find out and learn are much more significant.
Rob Hyland [00:09:12]:
Oh, this is wonderful, and it allows me to put this in an easy-to-understand but also technical context. So in the beginning of AI, especially as it was materializing into real applications in the 80s, 90s, and 2000s, there were two camps of AI: what was called symbolic AI, and connectionist AI, or statistics. And I have quipped that we do not have the fundamentals, the raw physics, of AI pinned down for either of them. So it was kind of just throwing spaghetti at the wall. However, one of them was particularly well suited to accelerate given an obscene amount of data and an obscene amount of processing power. That was the connectionist camp.
Rob Hyland [00:09:56]:
And so, you know what happened? Actually, I'll just go back a little bit. It used to be laughable, like in the 90s, that you'd have a neural net that took 100,000 pictures of a cat to learn, and it would take five years to compute. Well, who knew that we would end up with that many pictures of cats, and then some. And then, and I'm oversimplifying a bit, it was basically the same mathematics for the deep neural nets; once supplied with that amount of data, and with your point about Moore's Law and graphical processing units, that really just carried things forward. So even though we still don't have the fundamentals, the theoretical underpinnings, which I thought were going to be important before we hit this big wave of performance, it turned out that just having enough data and enough processing power has launched us into a real revolution.
Ken Miller [00:10:54]:
I want to pull on that thread a little bit, because that's really interesting. Do you view it that we have to catch up on the theoretical side of the equation? Or is keeping up with just the raw data enough of a driver to get to the system or the capability that we need on the other side? Do we have ground to make up? Because the theoretical side is often more tied to human understanding, versus the technical side, where it's just about how you compute and how fast you can compute.
Rob Hyland [00:11:34]:
I got you. I think all innovation comes from a theoretical basis. If you look at physics and everything we can do in materials and so forth, if we did not have the theory and were just doing trial-and-error experimentation, it would not have carried into the industrial revolution like it has. And even space, right? Astrodynamics, everything was worked out decades before. So it is important to have that. But I also think raw experimentation, with Ukraine and so forth, and real necessity, does push us. So I'll say both. And I'm happy to say the US science and technology investment has been cognizant of that.
Rob Hyland [00:12:20]:
This is actually a great framing for it. I mentioned the three waves of AI: essentially, the symbolic-based approaches were the first wave, the connectionist and statistical approaches the second wave, and the third wave puts them together. Really, what we're seeing now is the result of massive commercial investment in the second wave. So DARPA has a program, which has had different names, but when the three waves of AI were being framed it was called AI Next. It asks: how do we get past the walls of various sorts that these great LLMs and deep neural nets will hit? DARPA is already investing in what we need to get that theoretical foundation, but also to understand what works and doesn't work, even from a cyber standpoint, and for computing on the edge.
Rob Hyland [00:13:17]:
So yeah, the answer is both, and we are being funded to work across the spectrum from the very practical to the very theoretical.
Ken Miller [00:13:24]:
Now, when you talk about the role that experimentation plays: obviously it's always played a role in military technology, dating back however long ago. But nowadays, and I've said this on the show before, when we talk about military training, you want to train like you fight. And if you look over at Ukraine, and even at China, there's this element of training by fighting. Not just fighting the fight you want to win; every operation is an act of training, because it's going to lead to some other consequence that you have to be ready for. How has experimentation in this field adjusted over recent years with this final phase? Because it almost feels like the time from experimentation to fielding a system has gotten a lot shorter.
Ken Miller [00:14:22]:
And it really never stops. Even once a system is fielded, you're still training and experimenting with the systems in the field, because the opportunities are so great. So talk a little bit about how the mindset of experimentation has changed recently in this field.
Rob Hyland [00:14:40]:
Yeah, we have some problems there, in that the larger compute and the larger models, you know, everything is getting bigger and bigger. To that degree, it's put the keys in fewer hands, at least right now; there's going to be a pendulum swing, and it's changing. In terms of generating the foundation models, which are the basic building blocks, only a few companies can do that, and certain things get baked in. Now, the good news is it's also democratized things, so way more people are now able to use AI, even without programming. So there are challenges, but I think experimentation is going faster, and we are seeing uses of AI that have kind of blown my mind, and I've been in this a while.
Ken Miller [00:15:28]:
Well, and when you think of the data out there, your mind goes instantly, as you mentioned earlier, to pictures of cats, or pictures of people walking, or whatever the surveillance feed is. But correct me if I'm wrong: when we talk data in this field, we're really going into each of those pictures, into the pixels and the data embedded in each of those images. So when we talk large data sets, it's not just lots of pictures; it's dealing with really minute pieces of data, data packets. So talk a little bit about the challenges of getting the technology to really collect and understand the data it is absorbing, and how hard it is to get a clear understanding of an environment you're interested in for operational success.
Rob Hyland [00:16:28]:
Yeah, excellent. The EMSO domain is a great example of where we're able to apply, I think quite effectively, a lot of these breakthroughs in symbolic and connectionist AI, and now in this hybrid AI, to use the three waves framing. But we're hitting some key challenges. One of those is being able to field things faster. Another is the availability of data and the time it takes to train. And then finally, once you have the system and it's doing something, to stay with that second, connectionist wave of AI: you don't know what it learned and didn't learn. You can't just go in and fix the bug or make it do something different; it takes a very specialized set of skills to retrain the model and really understand what it does. So one of the names for that class of problems is explainable AI.
Rob Hyland [00:17:39]:
Another challenge is competency-aware AI. You'll see that AIs will very confidently give you the wrong answer, and not necessarily know it. And again, every time I say there's a challenge, oh geez, this is a real limiter, we and the larger S&T community are being funded to knock those down. But those are a couple.
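Rob's point about competency-aware AI can be seen in miniature with a softmax output layer. The classifier scores below are invented for illustration: even when a model's raw scores for an input it was never trained on are essentially noise, softmax still reports a high "confidence," which is why a confident answer is not the same as a competent one. A minimal sketch:

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical classifier scores for an emitter type the model never saw.
# The scores are essentially noise, yet softmax still looks "confident."
logits_for_unknown_emitter = [4.1, 0.3, -1.2]
probs = softmax(logits_for_unknown_emitter)
print(max(probs))  # ~0.97: confident, but meaningless for this input
```

Detecting that the input is out of distribution, rather than trusting the softmax score, is the substance of the competency-aware AI research Rob mentions.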
Ken Miller [00:18:03]:
That's what drives technology: addressing challenges. So you're going to run into that. Obviously, with your personal and professional track record, as well as Charles River's, you're engaged in this on a daily basis. So share with us: based on some of the tests, experiments, and analysis that you run in your position, what are some of the opportunities in AI and machine learning that you're tracking moving forward, the ones that keep you up at night or drive you to excel?
Rob Hyland [00:18:39]:
Three come to mind, and the first two are really about this hybrid AI, bringing the symbolic together with the connectionist neural nets. The first one is directly applying this hybrid AI to the EMS processing pipeline. The nirvana would be that we're able to do the contextual adaptation I mentioned before, so the system can be learning entire parts of the domain, entire operations of the pipeline, and learning entire characterizations of what is going on from complex emitters, either our own or others'. That's a really hard problem that the connectionist second wave of AI is just not going to solve on its own. So one particular approach, just to throw this out to your community, that people may not have heard of, is probabilistic programming languages. What is a probabilistic programming language? A PPL, I'll just say, is a programming language that has first-class data structures for probabilities. So you can basically have one line of code that says: go do a Gibbs sample and tell me the most likely next state.
Ken Miller [00:19:57]:
What is a Gibbs sample?
Rob Hyland [00:19:59]:
It's a way of sampling from a probability distribution, like drawing a sample from a normal distribution. So you could say, hey, here's a signal hopping; the system has learned something about that signal's hopping and its channels. It only has to see some of that behavior, and then it builds a probabilistic representation of what's likely next. So it's got that sort of continuous, dynamic learning. And then if it sees the pattern again, even without the full thing, it can say: well, I saw that part of it, and I predict, if this is the exact Ford F-150 my neighbor owns, that it will do this thing next. And that's very powerful. And that's just built in, alongside neural nets and probabilistic reasoning.
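To make the hopping example concrete, here is a plain-Python stand-in for what Rob describes, with invented channel numbers: learn an empirical conditional distribution of "next channel given current channel" from a partial observation of a hop pattern, then sample a likely next state. (Gibbs sampling proper cycles through the conditional distributions of a joint model; this one-variable version just illustrates the sample-what's-likely-next idea a PPL would give you in a line or two.)

```python
import random
from collections import Counter, defaultdict

# Toy observed hop sequence for a frequency-hopping emitter
# (channel numbers invented for illustration).
observed_hops = [1, 3, 5, 1, 3, 5, 1, 3, 5, 1, 3]

# Learn empirical transition counts: P(next channel | current channel).
transitions = defaultdict(Counter)
for cur, nxt in zip(observed_hops, observed_hops[1:]):
    transitions[cur][nxt] += 1

def sample_next(channel, rng=random):
    """Draw a likely next channel from the learned conditional distribution."""
    counts = transitions[channel]
    channels = list(counts)
    return rng.choices(channels, weights=[counts[c] for c in channels])[0]

print(sample_next(3))  # with this toy data, always 5
```

With richer data the counts become genuine probabilities, and the same sampling call predicts the most likely continuation of a partially observed pattern.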
Rob Hyland [00:20:45]:
So what do PPLs, probabilistic programming languages, bring to the table? They allow you to take software engineering and rigorous principles like encapsulation, separation of concerns, and disentangling models from solvers, and actually write a program. So you can use the full facility of everything we've learned about programming, but bring in neural nets and all these probability distributions. To give you a sense of why that matters: these neural nets tend to be very large, entangled things, and while more building blocks are appearing, it can take years to develop them, especially if you don't have the data. So one example I like to throw out: there was a United Nations machine learning system for seismic monitoring. On the order of a hundred million dollars was spent over several years to develop, let's say, 25,000 lines of code. And then it was reframed: that system, NET-VISA, was implemented in a probabilistic programming language in about 25 lines of code.
Rob Hyland [00:21:58]:
So here you have the ability: PPLs allow you to build smaller, more easily maintained machine learning systems, and to build them more quickly.
Ken Miller [00:22:07]:
And I would imagine that also opens the door for faster upgrades or reprogramming, any changes that need to take place, because you're dealing with dozens of lines of code instead of thousands.
Rob Hyland [00:22:22]:
Yes. You ask, how could you do that? But that's what you get when you separate the models from the solvers: you have a programming language that is just a programming language, plus a model and the ability to learn. So you reduce what needs to go through the authority to operate; the security accreditation covers a much, much smaller piece of code, and you don't have to keep redoing it. Sorry, I mentioned the models and the solvers. A solver is kind of like an engine; you have a whole bunch of them that apply to a lot of mathematical situations, wherever you employ them. So you only need to get an ATO on that once.
Rob Hyland [00:23:05]:
And then the model is what I'm writing. I use code to write and express things, I can get subject matter expertise in there, and when something's not operating, I can go in and just make code changes. There's still no silver bullet; these are still very complicated mathematical things. But it makes them easier and more straightforward to evolve, and of course, fewer lines of code are just easier to maintain.
Ken Miller [00:23:31]:
What effect do some of these advances have on our ability to do military planning, particularly when we get into joint operations or multi-domain operations, JADC2, things of that nature? You mentioned earlier that you can look at a particular domain where you're getting the data, and maybe you didn't mean a functional domain the way we organize them: air, land, sea, space, cyberspace, and so forth. Data is ubiquitous, but our military is structured by domain. So how have advances in AI and machine learning changed military planning for multi-domain operations, getting people to work together and to learn from data outside their own domain, data that decades ago would not have influenced a particular service's or domain's capabilities, but today does, because data is everywhere?
Rob Hyland [00:24:26]:
Yeah, this is great, and it actually aligns with the second promising result. EMS touches everything, right? So when you're thinking about how to plan, think about all the different JADC2 domains, from space to air to surface, subsurface, and land, all of them. You have to learn how that all works together, and right now our simulations are all very segmented. But if you want to really plan, whether at the campaign level, or down at the fleet or mission level, or, to use the Army framing, brigade and below, everything is still connected.
Rob Hyland [00:25:09]:
So from a technical standpoint, that is a wicked problem, with that many interacting elements. We have an interesting result in deep reinforcement learning, an extension to deep reinforcement learning that uses hybrid AI. I'll stick to a paper of ours that just came out in December about using this hybrid AI; of course, everything in this computer science domain has a couple of different names, so we use something called neural-symbolic methods, which allow us to do course-of-action generation. What's a breakthrough here is that the course-of-action generation simultaneously solves across the multiple physical domains, space, air, surface, subsurface, and land, plus the electromagnetic spectrum: communications, jamming, and so forth. Now, we did this in more of a game environment, as the paper presented at ICEC 2024 in early December describes. But the results are very suggestive of being able to break through that wicked problem. If you don't know how you're going to stand up against an ad hoc, decentralized, novel use of equipment, then you're really not prepared for the fight.
Rob Hyland [00:26:44]:
Up until now, that's something everyone did in silos. But as we all know, the spectrum and the uses of the spectrum all have interacting effects. And anytime you have that many interacting effects, it mathematically, even with infinite computing, gets intractable.
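The intractability Rob mentions is easy to see with back-of-the-envelope arithmetic. The per-domain action counts below are made up; the point is only that planning domains separately scales with the sum of the action counts, while planning over their interactions scales with the product, and exponentially in the planning horizon:

```python
# If each domain's actions had independent effects, you could plan each
# domain alone (sizes ADD). Interacting effects force planning over the
# JOINT action space (sizes MULTIPLY). Counts below are invented.
domains = {
    "space": 8,
    "air": 20,
    "surface": 15,
    "subsurface": 6,
    "land": 25,
    "spectrum": 30,
}

separate = sum(domains.values())  # plan each domain on its own
joint = 1
for n in domains.values():
    joint *= n                    # one joint action = one choice per domain

horizon = 10                      # planning steps
print(separate)                   # 104 actions to consider per step
print(joint)                      # 10,800,000 joint actions per step
print(joint ** horizon)           # astronomically many 10-step joint plans
```

This multiplicative blow-up is why brute force fails and why hybrid, structured approaches are needed to prune the joint space.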
Ken Miller [00:27:01]:
So, in a relatively recent episode, I can't remember exactly when, sometime last year, we were talking a little bit about AI, and this challenge came up: how do you account for a commander's intent? As AI and machine learning provide you the analysis, it will often butt up against a commander's intent, because it's really hard to put that into an algorithm. There's been some progress in that area, and I would imagine that progress shows up in this space, particularly with JADC2 and helping us develop operational plans across multiple domains. Can you talk a little bit about how that's changed recently?
Rob Hyland [00:27:46]:
Yeah. The quick answer is: that's hard. There's some progress, but really understanding and modeling things like, you know, gray zone simulations and so forth, that's hard.
Ken Miller [00:28:01]:
Let me ask it another way then, and then I'll stop interrupting you. We talk about democratizing EW, getting it out at the lowest levels. Is commander's intent even as important as it was before? If we're equipping down at the warfighter level, as low as we can go, with the tools for planning and AI evaluation and analysis and so forth, is commander's intent as important, or more important? What role does it play now? How has it been affected?
Rob Hyland [00:28:36]:
Yeah, so I'll say that with deep reinforcement learning, there's a really interesting thing it can do: very novel applications of tactics and strategies. And I thought we were going to have a real commander's intent problem with this, that no human would do these types of things. Sure, there's a way to win, but if it's implausible, like a suicide mission or something else, you're just not going to ask 600 souls from various destroyers to go into this carrier task force; but the AI might do that. So I thought that was going to be a big problem. But it turns out that, with modeling, the biggest answer from a technical standpoint is just to have the right models and the right representations at the right levels of the architecture. That's what we ended up with. Putting enough guardrails on what we think would be reasonable TTPs at the low level actually aggregated well, which in my experience has not been the case most of the time. With weather, for example, if you try to do it from the bottom up and don't use ensemble methods, you get silly things happening that don't reflect what the weather really does.
Rob Hyland [00:29:49]:
But in this case, the commander's intent was cohesive. So the answer is: we just modeled it enough that it mattered. It also came out of the AI agent we used, a new invention we came up with on top of deep reinforcement learning, called neural program policies, or NPPs. Just like PPLs and so forth; there are a lot of P's, which is just what.
Ken Miller [00:30:15]:
We need more acronyms to confuse us here.
Rob Hyland [00:30:18]:
So it did actually work. And I think it's a little less important to get it exactly right, to exactly what that person on the other side of the threat equation is thinking. But you've got to get it close, so it's not absurd.
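One common way to encode the "guardrails on reasonable TTPs" Rob describes is action masking: filter a policy's action set through plausibility rules before sampling. The actions, rules, and weights below are invented for illustration; this is a sketch of the general idea, not Charles River's implementation:

```python
import random

def plausible(action, state):
    """Hypothetical TTP guardrails: veto courses of action a commander
    would consider implausible, regardless of their expected payoff."""
    if action == "ram_carrier_group":
        return False  # implausible: effectively a suicide mission
    if action == "emcon_silent" and state["needs_comms"]:
        return False  # can't go radio-silent while coordinating
    return True

def choose_action(policy_weights, state, rng=random):
    """Sample from the learned policy restricted to guardrail-approved actions."""
    allowed = {a: w for a, w in policy_weights.items() if plausible(a, state)}
    actions = list(allowed)
    return rng.choices(actions, weights=[allowed[a] for a in actions])[0]

# The learned policy may prefer the implausible action; the mask removes it.
policy = {"ram_carrier_group": 5.0, "standoff_jam": 2.0, "emcon_silent": 1.0}
state = {"needs_comms": True}
print(choose_action(policy, state))  # "standoff_jam": the others are masked
```

Because the constraint is applied at the lowest level, plausibility aggregates upward, which is the behavior Rob says made the commander's intent come out cohesive.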
Ken Miller [00:30:35]:
So let's talk a little bit about the threat side of the equation and the challenge there, because I was talking with someone about how we traditionally organize our military to fight two simultaneous wars, one on either side. So a lot of the conversation is about Russia and China. But when you really get down to it, so much of where we need to go as a military, and in terms of our national security, is looking out toward the INDOPACOM region and China. Not that Russia isn't still a threat; we're learning new things every day from our involvement in the Ukraine war, and that is one side. But really, I think INDOPACOM is where we are facing some of the biggest challenges. You mentioned before we started recording that it's a knife fight we're in with China; it's not a traditional exchange of salvos, weapons versus weapons, it's a lot more subtle than that. Talk a little bit about the challenges INDOPACOM brings to the table and how that's affected this field.
Rob Hyland [00:31:55]:
We need to think outside the box in a number of different ways. When you're thinking about EMSO, it's not just our hardware and our boxes and so forth; it's the whole decision chain. A lot of the time, fusion is happening in the cockpit of an F-16, where somebody is really seeing everything across the whole electromagnetic spectrum; it's wetware, the decisions our pilots and warfighters are making. So another important thing is being able to visualize all the elements. And another way of thinking outside the box is doing your planning ahead of time; visualizing the threat space is going to be very important for everything they can do. So it's great, as I mentioned with that second innovation and promising result, about JADC2 planning using those NPPs over deep reinforcement learning.
Rob Hyland [00:32:59]:
I think there's another piece: visualizing even the space domain. We have a capability for actually visualizing space situational awareness, and then running potential operations and threats and understanding what's going to be available for comms, because you can't just dial up a satellite, right, no matter what your rank is. So I guess where I land is that we need to be using our full end-to-end capability. But I'll circle back now to this knife fight, and INDOPACOM specifically. We are absolutely in an AI knife fight with China. They have a nationwide imperative to be the global power in AI; that's their moonshot, and they're doing pretty well.
Rob Hyland [00:33:55]:
So we tried to sanction them and so forth, and we're now figuring out this new large language model called DeepSeek R1, what it does and doesn't do. To some degree, we drove them to innovate at low SWaP.
Ken Miller [00:34:11]:
So can you tell us a little bit about DeepSeek, the DeepSeek-R1? I'm not familiar with that program.
Rob Hyland [00:34:16]:
Okay. You know, we have ChatGPT and Gemini and dozens of other large language models that take, you know, hundreds of millions of dollars to train. It depends on where you draw the line, and that's one of the debates: there's a lot of research, and then there's training the latest thing, so how much does it take to train this thing? Well, China announced, in a paper from a startup company called DeepSeek, a large language model that is in some cases outperforming the best that we have, like ChatGPT. R1 is one particular model, and they also have V3. And the notable thing is what they said it cost to train. I think something like ChatGPT was in the hundreds of millions.
Rob Hyland [00:35:10]:
DeepSeek, they said, trained for something like 6 million. So that seems very achievable. We don't know how much training and investment came before that. But what it is doing is high performance at lower compute needs. We're figuring that out. So that was a bit of a surprise for some people. I actually wasn't surprised.
Rob Hyland [00:35:34]:
I think there'll be more like this. So that's my point about the knife fight.
Ken Miller [00:35:39]:
So in the remaining time that we have, I do want to talk a little bit about what's on the horizon, because obviously that's the force pulling us along, the promises that are out there. So talk to us a little bit from your vantage point. You've been in this 30 years, so you've really tracked a lot of the history of this, where we've been and where we're going. What are some of the recommendations for where we need to focus our attention, whether it's through more experimentation, more funding, more manpower, whatever? What are some of the drivers that you recommend or that you're looking at specifically?
Rob Hyland [00:36:20]:
If people haven't looked into hybrid AI, neurosymbolic methods for applying AI to very complex and militarily sensitive types of applications, that's a very hot area of research, but also an area of application. So I'll give you a quick example. You have these large language models that can answer all these things and do a research paper in 20 minutes, and it blows my mind. It really is tremendous. But it makes these errors. It's not a complete model of the way the brain works, and it doesn't interact with real physical systems and so forth. So researchers, like at MIT, know what those limits are. What it can't do is things like a logic puzzle: say you're seating a little dinner party, and Sally is vegetarian, and Ron wants to be in front, and so forth. ChatGPT just fails at that quick logic puzzle.
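The dinner-party puzzle Rob describes is exactly the kind of problem a symbolic solver handles trivially. Here is a minimal sketch in Python; the guests and the specific seating constraints are hypothetical, invented just to illustrate brute-force constraint satisfaction:

```python
from itertools import permutations

# Hypothetical guests and seats 0..3 along one side of the table.
guests = ["Sally", "Ron", "Ana", "Ben"]

def valid(arrangement):
    """Check the (invented) constraints for one candidate seating."""
    pos = {name: i for i, name in enumerate(arrangement)}
    return (
        pos["Ron"] == 0                          # Ron wants the first seat
        and abs(pos["Sally"] - pos["Ana"]) == 1  # Sally sits next to Ana
        and pos["Ben"] != 3                      # Ben refuses the last seat
    )

# Enumerate all 24 orderings and keep only those satisfying every rule.
solutions = [p for p in permutations(guests) if valid(p)]
print(solutions)
```

A few lines of explicit logic solve the puzzle exhaustively and provably, which is the capability the neurosymbolic work pairs with neural nets' pattern learning.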
Rob Hyland [00:37:21]:
So the MIT folks, well, I shouldn't say just MIT, the research community, came up with a data set called CLEVR, and it stumps the large language models. The hybrid AI, the neurosymbolic approaches, were able to solve CLEVR very quickly, and then we had to make another one called CLEVRER. So anyway, the point is that by bringing together the symbolic approach, with programming languages, and the ability to learn with deep neural nets, you're able to evolve the systems more quickly, and you have lower data set needs, because we talked about data being everywhere, but we don't necessarily have data in the military context. So that's one. Then I'll throw in another of my recommendations: get mature in your MLOps pipeline. These AI learned models are going to be part of everything we're doing, and in the military context it's really hard to even acquire AI, let alone maintain AI. So we want a pipeline where, before a model goes on a mission, you have done the V&V with extreme rigor, so you know what it's going to do and not do. And then when it's out there, you have ways to flag when it ran into the corner cases.
Rob Hyland [00:38:48]:
So when you come back you can say, oh, here are the corner cases, because you're never going to be able to test for everything. So that's another thing.
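The pipeline Rob outlines, rigorous V&V before deployment, then corner-case logging in the field, can be sketched in a few lines. Everything here is illustrative: the accuracy gate, the confidence threshold, and the toy model are assumptions standing in for a real acquisition process:

```python
ACCEPT_ACCURACY = 0.95   # pre-mission V&V gate (assumed value)
LOW_CONFIDENCE = 0.60    # below this, flag the input as a corner case

def verify(model, test_set):
    """Pre-deployment V&V: refuse to field a model that fails the gate."""
    correct = sum(model(x)[0] == label for x, label in test_set)
    return correct / len(test_set) >= ACCEPT_ACCURACY

corner_cases = []  # inputs the fielded model was unsure about

def predict(model, x):
    """In-mission inference that records low-confidence inputs for review."""
    label, confidence = model(x)
    if confidence < LOW_CONFIDENCE:
        corner_cases.append((x, label, confidence))  # reviewed post-mission
    return label

# Toy stand-in model: classifies a number's sign, less confident near zero.
def toy_model(x):
    return ("pos" if x >= 0 else "neg", min(1.0, abs(x)))

test_set = [(2.0, "pos"), (-3.0, "neg"), (1.5, "pos")]
if verify(toy_model, test_set):            # field it only if V&V passes
    for x in [4.0, -0.2, 0.05]:            # mission inputs
        predict(toy_model, x)
print(len(corner_cases))                   # flagged for post-mission analysis
```

The point of the structure is that the verification gate and the corner-case log are part of the pipeline itself, not an afterthought, so every fielded model comes back with the evidence needed to retrain it.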
Ken Miller [00:38:58]:
Great. So just to kind of put a bow on this conversation: when you look out at the current security climate, the military defense climate, obviously a new administration coming in, some of the conversations happening are that we have to dramatically reform our military structure to meet today's threats. Now this gets outside of AI per se, but obviously technology is driving this awareness that we're not structured right as a military, that we don't have the right people in place. So I want to get your thoughts on the human element, whether you call it human-centered, human-on-the-loop, human-in-the-loop, wherever you want to put the human. We don't have enough humans who can do this type of work, so we're having trouble filling all the operational spots, all the technical expertise.
Ken Miller [00:40:00]:
You know, we lean on companies like Charles River and agencies like DARPA, and you all do great work. But from a manpower, force structure, people aspect, where do we need to go? Is there a recommendation or a challenge that we have to tackle on the manpower side?
Rob Hyland [00:40:21]:
Yeah, what a great question, especially just looking at the great power competition in the Indo-Pacific. We don't have enough people. And some of the things I mentioned, if you take a certain approach, you just need more specialized people, and they can get jobs elsewhere in tech. All you need to say is AI and startup, and people will throw money at you. So this is a first-class concern. And so I'll hit the theme again: it's time to think outside the box. From the EMSO side and so forth, if you really want to tackle this, I think thinking deliberately about the team that you're putting together is one part of it.
Rob Hyland [00:41:07]:
So first I'll just hit this: thinking deliberately about establishing a cross-functional team is very important, and then investing in upcoming talent. It's easy to go to the same places and get the folks who will just show up, but think more broadly. You do want computer science, you want engineering, you want your AI modeling, but try to pull from commercial. I think people will be empowered and excited about the national security mission if you can make it real for them. But then you also need the hardware folks, and people who understand networks and comms and cyber.
Rob Hyland [00:41:47]:
And then the last thing in terms of the human equation is the end user. It really matters to have an end user on your team. So where I'm going with this is that, yeah, that sounds like I need more diverse people to do anything, and that's going to compete with China putting this much into it. But I think once you get a team together and they're really firing on all cylinders, that will actually draw more people into DoD. And I'll say one other thing: you need communications people like yourself, right, to explain why this is neat.
Rob Hyland [00:42:25]:
And it's not just neat. Yes, it's intellectually interesting. The most amazing work I've ever done is in this space we've been talking about today. And if someone like yourself didn't go out and get the word out, well, you know, I thought my career would be working for IBM or something like that.
Ken Miller [00:42:44]:
Well, thanks. I mean, that's really the purpose of this show. For our listeners who know my background: I've never served in the military, nor do I have an engineering degree, so I'm always kind of the outsider. But I just find this field fascinating, and there's so much opportunity. It's driving the world forward in so many ways that we're still discovering. So it's an exciting field. Really appreciate you taking time to join me.
Ken Miller [00:43:11]:
You know, as always with all these episodes and topics, we run out of time. We could talk about this for much longer, but I really do appreciate you taking time to join me. Hopefully I'll have you, or someone from your team, back on here in the future to give us an update on where things are going, because this is a topic we hear a lot about from our listeners. They want to talk a lot more about it. There's so much to unpack, and I would love to have you back here in the future.
Rob Hyland [00:43:38]:
Well, Ken, thank you so much. It's been my pleasure. We covered a lot of ground, and AI is a huge field. So yeah, this was a tremendous conversation. I appreciate it.
Ken Miller [00:43:50]:
That will conclude this episode of From the Crow's Nest. I'd like to thank my guest, Rob Hyland from Charles River Analytics, for joining me. As always, please take a moment to review, share and subscribe to this podcast. We always enjoy hearing from our listeners, and we appreciate all the engagement and recommendations. Thanks for listening.