How to Trust AI on the Battlefield
Jeff Druce [00:00:02]:
At this point, we're seeing such a rapid explosion in the advancement of many of these AI technologies. The slowdown may not be the tech being ready, but the humans being ready to use the tech in an effective way.
Ken Miller [00:00:22]:
Welcome to From the Crows' Nest. I'm your host, Ken Miller from the Association of Old Crows. In today's episode, we are going to talk about artificial intelligence and how we develop complex systems that adapt to novel situations and explain the reasoning behind the artificial intelligence capability. And I am pleased to be here with senior scientist from Charles River Analytics, Jeff Druce, who is an expert on human-centered AI. Jeff, thanks for joining me here on From the Crows' Nest. It's great to have you on the show.
Jeff Druce [00:00:55]:
Happy to be here. Thank you.
Ken Miller [00:00:56]:
All right, so just to get started, you know, we've had Charles River Analytics on the show before. Fantastic crop of subject matter experts. Really pleased to have you on to discuss this topic. Can you tell us a little bit about yourself, your background, and how Charles River Analytics is engaged in AI, and then we'll get into a little bit more of the discussion of the capability.
Jeff Druce [00:01:17]:
As you said, I'm Jeff Druce, Senior Scientist here at Charles River Analytics. I've been with the company for going on the better part of a decade, which is kind of crazy to say. I have a background in physics and math from the University of Michigan, took a few years off before grad school, then went on to get a PhD in engineering from the University of Minnesota. I did an internship with the Air Force Research Laboratory, AFRL, in Dayton, Ohio. That got me into machine learning, artificial intelligence, optimization, and I just really thought it was a super interesting topic. From there I went to Charles River Analytics and immediately got started in applied artificial intelligence, which Charles River has been involved in for a long time. I think they were founded in 1983, and we've been doing applied AI pretty much since the fuzzy logic days. AI kind of took a turn towards deep learning around 2014, 2015, or a resurgence of deep learning, I should say.
Jeff Druce [00:02:20]:
And yeah, we've been trying to push forward the basics of it as well as the application of it ever since. My particular interest and passion is providing explainability and enhanced verification and validation to AI-driven systems.
Ken Miller [00:02:36]:
With that as an introduction, I want to present the problem right out of the gate. We've talked about AI in the national security space for decades, but it's really been in the last five to 10 years that it feels like it's taken off, where you can start to see AI becoming a centerpiece of future military operations, how we employ it, and how we can more efficiently gain an advantage in the battle space using it. So looking back over the last five to 10 years of development generally, how has warfighting changed with the inception, or the more regular use, of AI, and how have current operations been driving AI development and research here in 2025?
Jeff Druce [00:03:28]:
The theater of modern warfare has really become more complex in the past decade with the introduction of AI-driven systems. Before, if you had to make a decision based on the information available, there was a very limited set of communications. Things were slow, they were over radio, they were hand-encoded, they were operating at human speed. But as soon as you start bringing in electronic reasoning, things done by artificial intelligence, then they can happen at a much more rapid pace. For a very quick example, if you're trying to perform a strategic maneuver in a tabletop exercise, think of people moving pawns around on a board: how many different scenarios could you go through, how many different combinations, moving things around physically? Versus if you have a simulated environment where you can do a million operations per second, you can permute through all the different combinations that could occur in order to make more effective decisions at a strategic level. And that's just from a decision-making standpoint. You could also think of bringing in AI, which has happened a lot in, for example, the Russia-Ukraine conflict, for ATR, or automatic target recognition, where you can leverage AI for perception: things like classifiers, full-motion video processing, tracking objects with high fidelity and high resolution.
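To make the permutation idea concrete, here is a minimal toy sketch in Python: a made-up scoring function stands in for a real simulator, and the objectives and values are invented purely for illustration.

```python
from itertools import permutations

# Toy "simulator": score a course of action (an ordering of objectives)
# by how quickly high-value objectives are reached. All names and values
# here are invented for illustration.
OBJECTIVES = {"bridge": 5, "depot": 3, "ridge": 2, "crossing": 4}
TRAVEL_COST = 1  # abstract cost per move between objectives

def score_coa(order):
    """Earlier captures of high-value objectives score higher."""
    score, time = 0.0, 0
    for objective in order:
        time += TRAVEL_COST
        score += OBJECTIVES[objective] / time  # value discounted by arrival time
    return score

# Exhaustively permute every ordering -- trivial for a machine,
# tedious to do by hand on a tabletop.
best = max(permutations(OBJECTIVES), key=score_coa)
print("best course of action:", " -> ".join(best), "| score:", round(score_coa(best), 2))
```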
Jeff Druce [00:04:50]:
All of those things are now entering the arena, where you can remove the necessity of manual human inspection and manual human reasoning from the equation, at least in some aspects. We still have a human in the loop, at least in the United States, at all these various places. But you can leverage AI for very fast perception and very fast reasoning over a larger, broader context than you could cover manually.
Ken Miller [00:05:19]:
A lot of times when we think of AI, I think our minds go to speed. Obviously there are so many AI-driven capabilities just in our regular economy where we're always trying to do things a little bit faster and, we say, more efficiently. But sometimes efficiency is a little more nuanced in terms of how we use technology in workspaces, missions, and so forth. I want to talk a little bit about this notion of trusting the speed with which AI works, because I can imagine that if you're only looking at speed, you're going to relax a lot of the other standards that you need. You mentioned trust, trust in the system, the reasoning in the system, and there are a lot of other pieces where, while it happens fast with artificial intelligence, we need human understanding somewhere in that process to say, okay, this algorithm is actually in line with where we want to go on the mission. So how is this intersection of trusting AI to help with more efficient strategic planning integrated with simply understanding the speed with which you can run through scenarios in the battle space?
Jeff Druce [00:06:38]:
Well, that's interesting, because as of now, a machine can generate a complex course of action, a COA, very rapidly. But you need to slow things down so the human can understand what's going on, because under the hood, you don't know what the reasoning process was for this course of action that was generated by an AI entity. So if you can slow down and unpack its reasoning, then you can still leverage the vast speedups in producing the strategic maneuver. It could still take humans literally weeks of planning to gather all the necessary information and run through the combinations to get there, whereas an AI could generate it virtually instantly. But you would need to unpack its reasoning process in order to trust it. Human trust is a very interesting thing.
Jeff Druce [00:07:36]:
User acceptance of technology is a fascinating topic that goes back to the industrial revolution, trust in automation, effectively. Will this thing do a good job? Do I understand its logic? Can I trust it? So even though AI-driven reasoning can make these fast decisions, if you want it to be trusted, if you want a human in the loop, there needs to be a mechanism to unpack it. And the quickness is basically limited to how fast the human can understand, and trust or not trust, the reasoning produced by the AI.
Ken Miller [00:08:15]:
So now, Charles River Analytics, an FFRDC, a federally funded research and development center, you do federal research for DoD in different capacities. Within that context, what are some of the programs that you are working on that really show the promise of, or speak to, what you're able to do? As AI continues to evolve at such a rapid pace, I think it's even hard for us to understand where we're going with some of this, but what are some of the key programs or capabilities, or if you can't talk specific programs, what are some of the capabilities, that you see really shaping the way we fight out in the field today?
Jeff Druce [00:08:52]:
I think the major one that I'm currently working on is the MERLIN effort, which is part of the DARPA SCEPTER program. That is a program that is looking to leverage deep reinforcement learning based agents to generate a large number of courses of action in order to make better and faster decisions in a complex space. And the second one, which I think nicely complements that, is the RELAX effort, which endeavors to add explainability to those same or similar deep reinforcement learning agents so that you can trust them and understand them.
Ken Miller [00:09:33]:
So is RELAX a program name, or is that an acronym?
Jeff Druce [00:09:38]:
RELAX is an acronym for Reinforcement Learning with Adaptive Explainability.
Ken Miller [00:09:42]:
So what exactly is RELAX trying to solve? What problem is it trying to solve for the Air Force and DoD operators in the field?
Jeff Druce [00:09:51]:
RELAX is intended to solve the problem of adding explainability to deep reinforcement learning agents. To unpack that a little bit, reinforcement learning is a branch of artificial intelligence where you're producing autonomous agents that are designed to take actions. That could be something as low-level as motor controls or as high-level as moving an entire battalion to a particular location via a strategic path. It can be enhanced with a deep neural network to actually make that decision. Back when it was originally used, the policy, which is the action generator, was just defined by something simple like a table or a linear model that takes simple information and does sums and products on it. It produces an action, and that's well and good, but it can only describe scenarios of a certain complexity, because you are limited by the capacity of the policy's underlying architecture to make the decision. So in the late 2010s we, we as in the research community, started using neural networks as the policy. You had a much higher modeling capacity that can make much more sophisticated decisions based on the information available to generate those actions.
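A minimal sketch of the two policy representations Jeff contrasts, a lookup table versus a small neural network, might look like the following. The states, actions, and weights are invented; a real deep reinforcement learning policy would be trained rather than hand-set.

```python
import numpy as np

# Old-school tabular policy: the action is looked up directly from
# human-readable state features (invented states and actions).
TABLE_POLICY = {
    ("enemy_near", "low_fuel"): "retreat",
    ("enemy_near", "full_fuel"): "engage",
    ("clear", "low_fuel"): "refuel",
    ("clear", "full_fuel"): "patrol",
}

def tabular_action(state):
    return TABLE_POLICY[state]

# Neural policy: the same kind of decision, but computed by stacked linear
# maps and nonlinearities over a raw feature vector. The weights below are
# random stand-ins for what training would actually produce.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # 4 raw features -> 8 hidden units
W2 = rng.normal(size=(4, 8))   # 8 hidden units -> 4 action scores
ACTIONS = ["retreat", "engage", "refuel", "patrol"]

def neural_action(observation):
    hidden = np.maximum(0.0, W1 @ observation)  # ReLU hidden layer
    logits = W2 @ hidden                        # one score per action
    return ACTIONS[int(np.argmax(logits))]

print(tabular_action(("enemy_near", "low_fuel")))
print(neural_action(np.array([1.0, 0.2, 0.0, 0.7])))
```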
Jeff Druce [00:11:21]:
Given the observations of the world, there was a more sophisticated training procedure that needed to happen in order to train that neural network, but it became possible with the advances in compute over the last few years. However, the big problem with neural networks is that they're inherently opaque. You can bring in a large data structure like an RGB image, and there can be thousands, millions of computations that map that input to the output, and the user has no idea what really happened under the hood to produce that output. Imagine a thousand linear models stacked on top of one another. By the time you go from the input to the output, you are left with no idea as to what about the input led to this decision. What if I change this input to this different scenario? Why didn't you consider this other avenue of ingress? There are just a lot of things that are not transparent to a user who wants to leverage these actions. So the idea is that RELAX can augment that system by adding explainability to the actions generated by the deep reinforcement learning agent.
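One common way to start probing which inputs drove a decision, shown here only as an illustrative technique and not necessarily the approach RELAX takes, is to perturb each input feature and watch how the chosen action's score moves. The tiny stand-in network below uses random, untrained weights purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(16, 6))  # stand-in weights: 6 input features -> 16 hidden
W2 = rng.normal(size=(4, 16))  # 16 hidden -> 4 action scores

def action_scores(x):
    """Tiny stand-in policy network."""
    return W2 @ np.maximum(0.0, W1 @ x)

def sensitivity(x, eps=1e-3):
    """Finite-difference sensitivity of the winning action's score
    with respect to each input feature."""
    chosen = int(np.argmax(action_scores(x)))
    base = action_scores(x)[chosen]
    grads = []
    for i in range(len(x)):
        bumped = x.copy()
        bumped[i] += eps
        grads.append((action_scores(bumped)[chosen] - base) / eps)
    return chosen, np.array(grads)

features = np.array([0.9, 0.1, 0.4, 0.0, 0.7, 0.3])
action, grads = sensitivity(features)
ranking = np.argsort(-np.abs(grads))
print("chosen action:", action, "| most influential feature indices:", ranking[:3])
```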
Ken Miller [00:12:35]:
So with that, you're basically addressing a huge gap that existed in AI, which is what we were talking about earlier: it wasn't just the speed of the decision, you need to understand why it made the decision, because that allows you to understand what other capabilities you might need to bring to the table, or how it might affect other things in theater that might not be apparent or clear within that determination by the AI system. If it's making a decision like, here's the path you need to take, it might be too vague for us to know exactly what's involved in getting to that solution. Or is it more about helping military leaders understand, hey, this is exactly why we're making this decision along the way, so that we can adapt more quickly on the human side to try to keep pace? So how does that explainability give you leverage in a battle space over your adversary when you're talking about the speed of the fight?
Jeff Druce [00:13:37]:
Yeah. I think a useful example is something that came about when AlphaGo first came into the mix. AlphaGo was an AI that was designed to play Go. At first, a bunch of example games were fed into the original system to help it learn what good decision making looked like; there were training sets that were curated in a way to help it learn. And then eventually Google DeepMind found that if they just had the thing play itself over literally millions of games, it developed these novel strategies that were really effective. And as it got better and better, it started beating Go masters.
Jeff Druce [00:14:24]:
And there was a move, known as the "alien move," that this AI took that none of the Go masters understood, and it ended up being of critical importance in defeating the human player. So imagine if, in a confrontation, we can leverage something like this model that can bring in a novel strategy, something incredibly effective, a tactic that is not known. Because if the adversary knows what you're going to do, if you're going by the book, well, that's pretty easy to defend against. But if it's a novel tactic, then it can be used to your advantage to gain superiority. But if you don't know what it's doing, it's kind of hard to trust, and that trust could be of critical importance.
Jeff Druce [00:15:14]:
People's lives are on the line. You need to be confident that this decision, this recommendation for an action, is legitimate. That's where explainability comes in. You can have these awesome AI-made tactics and strategies, but if you can't understand what it's trying to do and why, it's not reasonable to trust it, in my opinion. Most people have heard about hallucinations in generative models. We know about adversarial attacks on regular deep networks. There are ways in which these AI systems can be attacked; they're vulnerable, just like any other tool. And AI systems are tools. They're tools to be leveraged.
Jeff Druce [00:15:51]:
Tools need to be calibrated, known, and trusted. A large, complex system that generates strategies that seem very strange, even to experts, without evidence and reasoning to back up why it's doing what it's doing, that's not something that can be trusted. You have this really complex, awesome decision making that could be used, but you need something to crack open the logic in order to view it, in order to trust it, and then be able to leverage it. And you need to do that quickly and effectively so that you can still maintain the advantage of using the tool.
Ken Miller [00:16:23]:
So how is that explained logic then translated for the user, the combatant commander, whoever is using it to get that novel approach to the battle? How is it explained to them? Because I can imagine that if I'm sitting in a classroom and a teacher is explaining something advanced, they might explain it, but I still don't necessarily understand it. There can still be a gap between explaining and understanding. So what steps are you taking with RELAX to make sure that the explanation that comes from RELAX is understood? I would imagine there's a whole host of other training, like, this is how the AI system explains it, and this is how you need to understand that explanation.
Jeff Druce [00:17:10]:
That's a great question, and that's where the A in RELAX comes from, the adaptability. One thing that was a bit surprising to me, at least to a degree, on the DARPA XAI program, which was a DARPA explainability program, is that the explanations are very dependent, or should be dependent, on the user. If you're offering explanations to an AI specialist versus offering explanations to a general, they're very different in their nature. We needed to adapt our explanations to something the human, the particular human, can understand. And what we did was a collection of user studies with varying types of explanations, some narrative, some graphical, some combinations of them, in order to test the level of understanding people really have of the AI under the hood. So the short answer, like with every good question, is: it depends.
Jeff Druce [00:18:08]:
It depends on the person, what they're trying to understand, and what decisions they're trying to make. It's very deeply entrenched in user acceptance. These systems ultimately are going to have to tie really deep technology, deep reinforcement learning, into user-facing systems, and the HMI, or human-machine interface, and the information it presents are critical. And some people, even at the same level, even if they're both, for example, just users, one person may like a narrative, another may like graphs or pictures, or explanations augmented in some way. We found, for example, that counterfactual explanations are really helpful for gaining a deeper understanding. A counterfactual, in essence, is what would need to happen in order for something different to happen. So for example, we had an AI that would have taken a different defensive posture if the enemy's health was below a certain percentage.
Jeff Druce [00:19:04]:
That helps a person understand the important drivers of the AI's decision making and what it would have done in different circumstances. So it's very dependent on the scenario.
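Jeff's enemy-health example can be sketched as a simple counterfactual search: vary one feature until the recommended action flips, and report the flip point. The rule-based stand-in policy below is invented for illustration; a real system would query the trained agent instead.

```python
import numpy as np

def policy(enemy_health_pct, own_ammo_pct):
    """Invented stand-in for a trained agent's decision."""
    if enemy_health_pct < 35 and own_ammo_pct > 20:
        return "advance"
    return "hold_defensive"

def counterfactual_enemy_health(state, step=1.0):
    """Lower enemy health until the recommended action changes,
    and report the flip point as a counterfactual explanation."""
    health, ammo = state
    baseline = policy(health, ammo)
    for h in np.arange(health, -step, -step):
        if policy(h, ammo) != baseline:
            return (f"Agent chose '{baseline}'. It would have chosen "
                    f"'{policy(h, ammo)}' if enemy health were {h:.0f}% or lower.")
    return f"No change in action found by varying enemy health (stays '{baseline}')."

print(counterfactual_enemy_health((80.0, 60.0)))
```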
Ken Miller [00:19:16]:
All right, so you mentioned that you did a couple of studies. Can you dive into a little more detail about how those studies were executed and carried out, and how the recommendations from those studies have influenced the capability of RELAX on the user front?
Jeff Druce [00:19:32]:
Sure. It's been a couple of years now, so I'm not sure how good a grade I'll get on this quiz, but like any good scientific study, we started out with hypotheses and tried to gather information to test their validity. The fundamental questions we wanted to answer were: do explanations help a user's mental model become more accurate about how an AI really operates? Will they perform better with an agent they understand? And will they trust that agent more if they understand it better? Those were the three hypotheses we tested. We had a baseline scenario where a user just used an AI with no explanations; they just used a deep reinforcement learning agent. They basically had to know when to use the agent and when not to. So we crafted scenarios where the agent would perform poorly and where it would perform well. We knew about it; we basically hacked it to make it do well and do badly in different scenarios. And then, through a series of explanations, we helped the user understand what was important to the agent's decision-making process.
Jeff Druce [00:20:38]:
We revealed that via our HMI for the case where we did give the explanations, and we withheld it in our baseline test, kind of like standard A/B testing. We ran through several user groups, giving some access to the interface and some not. We asked a series of questions to test this, where we knew the correct answers, we knew what the agent's decision drivers were, and we asked the users for their version of the mental model and graded it for accuracy against what actually drives the agent's decision making. We also tested them on scenarios where they would pick to use the agent or not use the agent, knowing that it was good in some scenarios and bad in others. And then we gave them a Likert-scale questionnaire on whether they trusted it and in what scenarios they would trust it. A Likert scale is just a one through five: five is the most you would trust it, and one is you don't trust it at all. And what we found were statistically significant results that explanations helped the user understand the decision making of the agent, that they were more effective at choosing the scenarios in which the agent could do better, and that their trust of the agent overall increased.
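For a rough sense of how such an A/B comparison of Likert trust ratings could be analyzed, here is a sketch with made-up numbers; the Mann-Whitney U test is one reasonable choice for ordinal ratings, not necessarily the test used in the actual study.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Invented Likert-scale (1-5) trust ratings for two groups:
# participants who saw explanations vs. the no-explanation baseline.
# Real study data would come from the actual questionnaires.
with_explanations = np.array([4, 5, 4, 3, 5, 4, 4, 5, 3, 4])
baseline          = np.array([3, 2, 4, 3, 2, 3, 3, 2, 4, 3])

# One-sided test: did explanations raise trust ratings?
stat, p_value = mannwhitneyu(with_explanations, baseline, alternative="greater")
print(f"median trust with explanations: {np.median(with_explanations)}")
print(f"median trust, baseline:         {np.median(baseline)}")
print(f"U={stat:.1f}, one-sided p={p_value:.4f}  (p < 0.05 suggests a real difference)")
```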
Jeff Druce [00:21:57]:
And I should say the explanations we provided were both graphical and text-based, and a couple were combinations. I can get into the details if that's of interest. But yeah, we offered different explanation procedures that were adaptive: users could click on one or the other, or not use them at all, whichever explanation tool they found the most effective. Now, this was under a former project, and it very much provided us with a springboard to jump into the RELAX effort, where we are considering more complicated agents, neurosymbolic agents used in the SCEPTER program that are taking on more complex scenarios.
Ken Miller [00:22:36]:
So, neurosymbolic. I haven't heard that term before.
Jeff Druce [00:22:39]:
Yeah. Basically, it is something that uses a modern deep architecture, what people call deep neural networks; that's where the "neural" comes from, something that uses a neural network under the hood. And "symbolic" means you are using variables that are human-defined and well known. That's kind of like old-school AI, using features like, I don't know, the number of rooms in an apartment, someone's age, their blood pressure. Those are symbolic things; they have real meaning associated with them. Whereas a deep neural network just takes in an RGB image, 0-to-255 values
Jeff Druce [00:23:16]:
over a 512 by 512 image. That doesn't really mean anything in and of itself; it's just a collection of integer values. So yeah, it's an architecture that utilizes both neural networks and symbolic elements.
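A bare-bones sketch of the neurosymbolic idea: a neural component scores raw pixels that mean nothing individually, while symbolic, human-defined variables enter an explicit rule. Everything here, weights, thresholds, and variable names, is invented and far simpler than the agents used on SCEPTER.

```python
import numpy as np

rng = np.random.default_rng(2)
CONV_LIKE_WEIGHTS = rng.normal(size=(3, 64 * 64))  # stand-in for a trained network

def neural_threat_score(image):
    """Neural part: map a raw 64x64 grayscale image (0-255 values that mean
    nothing individually) to a threat score."""
    flat = image.reshape(-1) / 255.0
    logits = CONV_LIKE_WEIGHTS @ flat
    return float(np.max(logits))

def neurosymbolic_decision(image, fuel_pct, rules_of_engagement_ok):
    """Symbolic part: human-defined, meaningful variables and an explicit rule
    combine with the neural score."""
    threat = neural_threat_score(image)
    if threat > 2.0 and rules_of_engagement_ok and fuel_pct > 25:
        return "intercept"
    return "continue_patrol"

frame = rng.integers(0, 256, size=(64, 64))
print(neurosymbolic_decision(frame, fuel_pct=60, rules_of_engagement_ok=True))
```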
Ken Miller [00:23:30]:
So in your effort for RELAX to sift through all the data, where does that data come from? Is it drawn from multiple nodes throughout a network, or is there a process to define and limit what information, what data, what variables are considered when you run that program? Is it naturally just drawing in everything it can from the battle space, from whatever data collections we have going on, or is there something where we have to say, okay, we're going to just look at this slice of data to come up with a recommendation?
Jeff Druce [00:24:07]:
Yeah. So that gets into the details of neural networks. Typically you give a neural network all the information you can, and it learns how to make the best decision. There's a cool example of this that highlights the need for explainability, and it goes back to the LIME paper, a linear method for adding explainability. They found that a dog classifier for huskies got 100% accuracy, and people thought it was doing great. It turned out it was just looking at snow. If you gave it any image with snow in it, it thought it was a husky. If you gave it a husky on the beach, it thought it was a different
Jeff Druce [00:24:43]:
breed of dog, or something else altogether. So a neural network will utilize any information you give it and try to maximize what's called its reward function, or minimize its loss function, depending on which one you pick, and it will do that in whatever way it can. And they can learn in some strange, unintuitive ways. We saw some COVID classifiers that were just looking at the patient digits at the bottom, on the outside of the image; the images from everyone with COVID actually had an extra millimeter of gray at the top, because it was basically a leaked data set. Neural networks use whatever information they can to make a decision as accurately and easily as they can. So deep reinforcement learning agents can have the same problem.
Jeff Druce [00:25:32]:
They will use whatever information they can. If you give them bad information, or biased information that contains a cheap answer, they can behave strangely, in a simulated environment, for example. They can learn some strange behaviors that are very effective in a simulation, and if you don't dig into them, you think they're doing great. Imagine an old video game with Link, where you just stand in the bottom-left corner and stab: the game was super hard to beat, but that trick worked. It was not really a good generalized policy, it just worked in that one case. If you don't have something to unpack it, the agent will use information in strange, unintuitive ways to maximize that reward function.
Jeff Druce [00:26:12]:
And it may not be doing a good job overall. So if you bring in explainability, then you can tell whether it's actually doing a good job. And I guess to answer your question, you can restrict the information in some ways, but the general practice is to give it all the information you possibly can and let the system figure out the best way to use it.
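The husky-and-snow finding came from a LIME-style analysis: fit a simple local surrogate model around one prediction so its coefficients reveal what actually drove the output. A rough sketch of that idea, with an invented "flawed" classifier standing in for the black box:

```python
import numpy as np

rng = np.random.default_rng(3)
FEATURES = ["snow_pixels", "pointy_ears", "fur_texture", "background_trees"]

def black_box_husky_score(x):
    """Invented flawed classifier: leans almost entirely on snow."""
    return 3.0 * x[0] + 0.2 * x[1] + 0.1 * x[2] + 0.0 * x[3]

# Perturb the instance locally and fit a linear surrogate to the
# black box's responses (the core idea behind LIME-style explanations).
instance = np.array([0.9, 0.8, 0.7, 0.3])
perturbations = instance + rng.normal(scale=0.1, size=(200, 4))
responses = np.array([black_box_husky_score(p) for p in perturbations])

design = np.column_stack([perturbations, np.ones(len(perturbations))])  # add intercept
coefs, *_ = np.linalg.lstsq(design, responses, rcond=None)

for name, weight in sorted(zip(FEATURES, coefs[:-1]), key=lambda kv: -abs(kv[1])):
    print(f"{name:>16}: {weight:+.2f}")
# A dominant 'snow_pixels' weight is exactly the red flag explainability surfaces.
```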
Ken Miller [00:26:29]:
Is that process of giving the system as much information as you can a way to combat bias? We hear a lot that when you're developing AI systems or capabilities, not just in the military but anywhere, it's very easy for bias to slip into the algorithm or capability you're trying to put out. We don't want certain biases in the data set, because it could cause, like you mentioned, some strange explanations or some strange scenarios. Sometimes, I guess, bias is good, though. So from the human element, how do we either limit bias or ensure that whatever bias does get transmitted to the system is helpful in coming up with an effective solution or an effective response?
Jeff Druce [00:27:22]:
Unfortunately, I think that's another "it depends" answer, because, as you sort of indicated, bias can be good. If you have a testing scenario in which the same bias is there as in the training set, then you do a great job at whatever objective you're trying to accomplish. If all the cats in your testing set are tabbies and you give it only tabbies for training, then it's going to do a great job. But you may not want it to do that if you're giving it different types of cats. I'm not sure if that's a type of cat, hopefully I got that right.
Ken Miller [00:27:59]:
I mean, I'm not a cat person, but I'm pretty sure you're accurate.
Jeff Druce [00:28:01]:
But yeah, giving it more information can help or hurt, very much depending on the scenario. There's a phenomenon in machine learning called overfitting. If you're overfit, you are basically very tailored to your training set and you don't generalize outside of it, and in that case you want to give it more, and more diverse, information or data so that it doesn't overfit. Sometimes if you give it too much information, so that it is trying to learn too complex a decision given the capacity of the model, then it doesn't ever train or converge in a nice way. So it's very much a balance: you basically need the correct model capacity given the cardinality, or size, of the training data set for the objective you're trying to accomplish. And it's really hard to know if you got that right. There are some indicators you can look at, with held-out data sets and validation and looking at convergence, but until you get it out in the field, it's pretty hard to know you got it right.
Jeff Druce [00:29:09]:
And there have been a lot of cases where people thought models did really well. The early ImageNet models, for example: you get 98% on the held-out validation set, then you start testing with pictures on your phone and it's 10%. It's been tricky. And I think bolstering these tools with explainability and peering a little deeper into them and their functionality can make them a little more usable in the real world, even though they may sometimes not look as good at training time, for example on a baseline test. So it's tricky.
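The training-versus-held-out gap Jeff describes can be illustrated with a toy curve-fitting example: a model whose capacity approaches the size of the training set tends to look great in training and much worse on held-out data. The data and models here are synthetic, just to show what held-out validation is designed to expose.

```python
import numpy as np

rng = np.random.default_rng(4)

def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + rng.normal(scale=0.2, size=n)  # noisy underlying signal
    return x, y

x_train, y_train = make_data(15)
x_heldout, y_heldout = make_data(200)

def mse(coeffs, x, y):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# Modest capacity vs. capacity approaching the size of the training set.
for degree in (3, 12):
    coeffs = np.polyfit(x_train, y_train, degree)
    print(f"degree {degree:>2}: train MSE {mse(coeffs, x_train, y_train):.3f}, "
          f"held-out MSE {mse(coeffs, x_heldout, y_heldout):.3f}")
# The higher-capacity fit typically looks better in training and worse held out:
# that gap is what held-out validation surfaces before fielding a model.
```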
Ken Miller [00:29:46]:
I would imagine, though, that in the field, when you're testing these, having more decentralized execution of any military mission or plan can be good in terms of gathering as much diverse data as possible from all the different sources. RELAX is a program, and others, that you're working on with the Air Force. Is this a multi-service endeavor, or are you working with particular services on this? And how does the service, whether it's Air Force, Navy, or Army, they all have different mission sets and different priorities, how does that affect the work of RELAX and other AI systems? Because you're drawing data from a lot of different sources, different services, different capabilities, and you have different biases. So how do you sift through that to make it relevant for each service?
Jeff Druce [00:30:35]:
Yeah, I mean, I think.
Ken Miller [00:30:37]:
Or does it depend?
Jeff Druce [00:30:40]:
It depends. From computer science, there's the no-free-lunch theorem: there's no one thing, no one ring to rule them all, for these AI models. They can do a really good job, but just like a human, who gets specialized in a particular task, you can't have someone who knows how to fix diesel engines go fix a turbine jet. Yeah, they're both pretty awesome things, but I think smaller, simpler models that can deal with things like divided attention, that can go gather intelligence in an autonomous way, mean you don't have to have 50, or 500, or 5,000 humans sitting there with goggles on guiding these systems. They can go act and behave in a reasonable way even if they're not individually the most efficient or the best.
Jeff Druce [00:31:31]:
If you've got quite a few out there that are doing a reasonably good job, then you can leverage that. It's all very dependent on the scenario, and no one model does it all, at least not yet. I don't know how crazy we'll get with LLMs, how massive they can be, how much they can be applied towards reinforcement learning. There are a lot of unknowns with the super modern, advanced, massive, trillion-plus-parameter networks and what they can do. Foundation models for robotics, as they're called, haven't hit their moment yet like LLMs have, but people are thinking that could be the case, and that could change the answer to your question. But making one model work in a super broad variety of circumstances is a fundamental research challenge that has yet to be overcome.
Ken Miller [00:32:16]:
So just to put a bow on this, you mentioned some of the unknowns that are out there. You've been working on this program now for a couple of years, and it's gone through some iterations. Looking forward over the next five years or so, where do you see this going? What are some of the key challenges you are currently addressing to come up with a solution in the near future? And what can we expect to begin seeing in the coming years as this becomes a little more accepted in military planning and development?
Jeff Druce [00:32:47]:
Well, my crystal ball's got a crack in it right now, so I can't really give you a perfect answer for that one. But I think it really depends on how fast user acceptance comes. At this point we're seeing such a rapid explosion in the advancement of many of these AI technologies. The slowdown may not be the tech being ready, but the humans being ready to use the tech in an effective way. If you look at what the process has been for changing the metal in an aircraft wing, it takes 25 years to do that. And are we going to completely replace strategic decision making with AI? It would be crazy to do that in a short amount of time. I think the next five years, personally, is going to be about understanding how to use AIs effectively, and what we need to extract from them in order to understand and trust them.
Jeff Druce [00:33:38]:
People don't use things they don't trust. That's been found with automation throughout history. So it's about digging deeper into helping developers of these more complex systems really help the users accept them and trust them in situations in which they should be trusted. They shouldn't be trusted for everything. Just like any tool: if you use a screwdriver when a hammer is appropriate, well, it's just not the right dang thing. It's the same with AI tools, and the way we get them into the hands of people is to make them better understood, more trusted, and appropriately trusted.
Ken Miller [00:34:12]:
So if any of our listeners want to learn more, they can go to your website. It's cra.com, as in Charles River Analytics, and you have all the programs you're working on in there, with great explanations of how things work. It actually provides some great tutorials on some of these capabilities. So it's cra.com, and the artificial intelligence section will take you to more of the discussion there on AI. But Jeff, I really appreciate you taking the time to join me here on From the Crows' Nest. This was a great conversation. Hopefully it opens up doors to future topics so we can have you back on the show and dive deeper as RELAX and other similar capabilities continue to develop in the field. Thank you so much for joining me.
Jeff Druce [00:34:57]:
You're welcome. Happy to be here.
Ken Miller [00:34:59]:
Well, that concludes this episode of From the Crows' Nest. I'd like to thank my guest, Jeff Druce from Charles River Analytics, for joining me. As always, please take a moment to review and share the podcast. We always enjoy hearing from our listeners, so please take a moment to let us know how we're doing. Also, check us out on social media, on Instagram, YouTube, and Facebook. We regularly post updates on the show there, as well as clips and so forth. So please check that out. That's it for today.
Ken Miller [00:35:26]:
Thanks for listening.