Advancing Cognitive Warfare: Unveiling Neuromorphic Frontiers

(exciting music)

- Welcome to "From the Crow's Nest,"

a podcast on electromagnetic
spectrum operations or EMSO.

I'm your host, Ken Miller,
Director of Advocacy and Outreach

for the Association of Old Crows.

Thanks for listening.

In this episode, I sit
down with Dr. Joseph Guerci

to discuss the evolution and outlook

for cognitive electronic warfare systems.

Dr. Guerci is an internationally
recognized leader

in the research and
development and transition

of next generation sensor
and cognitive systems.

He has over 30 years of experience

in government, industry, and academia,

including a seven-year term with DARPA,

the Defense Advanced
Research Projects Agency,

where he concluded his service

as the Director of the
Special Projects Office.

Before I get to him, just
a couple quick items.

First, believe it or not, Congress just this weekend completed its long overdue work on the Fiscal 2024 federal budget by passing a funding agreement before embarking on a two-week recess, or what they call a district work period.

The spending package is $1.2 trillion, and includes $824.3 billion for the Pentagon and defense related activities.

That's about $26.8 billion more than Fiscal Year 2023, or about a 3% increase.

I'm in the process of putting together some of the summary details, which we'll put out in the form of an article in AOC's weekly eCrow Newsletter, probably next week, so be on the lookout for that.

It'll have a little bit more detail.

If you're a regular listener to the show, you'll know we've been critical of Congress in the past.

We can talk all day about total
funding, spending increases,

percentages greater than
last year and so forth,

but the reality is that
when you don't pass a budget

for six months into the fiscal year,

it really becomes less
about the total money

and more about the delays
in program development,

missed milestones,

the loss in testing and
training, et cetera.

So it goes without saying
that it's important

for Congress to figure out a way

to get at least the defense
appropriations done on time.

You would think it shouldn't be that hard.

You know, the total federal
budget is $6.3 trillion,

but only about 1.7 trillion
of that is discretionary,

and that's the part that Congress funds

through its annual appropriations process.

So that's only about 27%
of the federal budget.

And of that, about 50% of
that is the defense bill.

So if Congress figures out a way to just pass the defense bill on time, then, since mandatory spending runs without annual appropriations anyway, you're talking close to 87%, 90% of the federal budget basically done and running on time.

So, of course, you know,
defense being the largest pot

of discretionary funding opens it up

for being used for a lot
of political leverage,

but, you know, this is a
critical piece of legislation

that Congress has to focus on each year.

That being said, we already know not to expect anything different in 2025.

This agreement that just passed will fund the Defense Department through September 30.

This being an election year for Congress and the President, we can expect continuing resolutions, or CRs, to continue, so Congress will pass a CR on September 30 to fund the government probably until after the election, and we'll see what happens after that.

But hopefully Congress
can figure out a way

to do a better job at getting
the defense bill done on time.

I saw an interesting chart in the news the other day, and it actually highlighted the problem that we face on this front.

I was always under the impression that Congress was a little bit better at getting bills done on time than it actually is, but Congress has passed all of its appropriations bills on time only four times since 1977: in '77, '89, '95, and '97.

Other years, various bills have
passed and others have not,

the defense bill being the
most regularly passed bill,

but still, the trend since
about 2007 has not been good.

So for the past 15, 20 years,
it's been quite abysmal.

So, you know, Congress has
its work cut out for it,

but hopefully, you know,
they can figure out a way

to, you know, focus their energies

on that bill moving forward.

The only other quick item
I wanted to mention is,

as you know, we've been
releasing bonus episodes

every two weeks on the off weeks

of our regular "From the
Crow's Nest" episodes.

These are going to form our subscription episodes, available only to paid subscribers and AOC members.

Now, we are in the process of putting that subscription together behind a paywall, but until that's in place, they are free for everyone, so feel free to download 'em and listen.

The added benefit of this is that if you are a subscriber or an AOC member, or anyone until we get the paywall in place, and you want to listen in on the live recording of that episode from the audience and actually participate, you can comment, ask questions, and so forth.

So if you're interested in listening

or participating in the live recording

of those bonus episodes,
please go to crows.org,

contact us through that,

and we'll get you the link to participate.

Our next recording for the bonus episode is next Tuesday, I believe it's April 2 at 1:00 PM Eastern Time.

And with that, I'd like to introduce

or welcome my guest for the
show today, Dr. Joseph Guerci.

He is an internationally recognized leader

in research and development and transition

of next generation sensor
and cognitive systems.

I had the pleasure of having Dr. Guerci

on AOC webinars sometime last year

to talk about cognitive
radar, cognitive EW.

It was a fascinating webinar.

As we were going through it, I realized that, for those of you who are familiar with our AOC webinars, it's a little bit more of a technical discussion.

We have presentation slides and so forth so that you can kind of go in deeper,

but I really felt like there was a need

to have this as a topic of
conversation on our podcast,

where you can kind of look at
it at the 30,000-foot level,

kind of come to a better understanding

of what we mean when we talk
about cognitive systems,

because we hear about them a lot,

but I don't think we really
dive into what they mean.

I know I don't.

I use the words without really understanding the depth of what those terms mean.

So I reached out to Dr. Guerci,

asked him to be on my podcast,

and Joe, it's great to have you here

on "From the Crows Nest,"

and thank you for taking time

out of your schedule to join us.

- Thank you, Ken. It's great to be here.

- All right, so just to get started,

you know, as I mentioned,

this is a topic that
mystifies a lot of people.

We use the terms, we think we
know what we're talking about,

but oftentimes we're kind
of mixing up definitions.

You've been in this field for decades,

and it's really not a new field.

We've thought about cognitive systems

and been working on them
for what, 20, 30 years,

but recently, within the last 10,

it's kind of picked up speed.

So could you give us a
little bit of insight

into why cognitive systems
are picking up pace

in terms of their opportunity
being in the spotlight today

and kind of where we came from
over the last 20, 30 years?

- Thanks, Ken. That framed
the question really well.

I will answer it directly,
but let me just first talk

about the definition of cognitive.

Unfortunately, the word cognitive is not that well known or well defined, I would say, by the general populace, and that's especially true when interacting with our international colleagues.

You know, many things get lost in translation.

So cognition is actually

a very well-defined scientific term

in the biological and
psychiatric sciences.

So it relates to all of the things we associate with a person who's awake and functional, like problem solving, memory, things of that sort.

And in fact, on the very first page of the first chapter of my book on cognitive radar, which was first written back in 2010, I actually used the National Institutes of Health's definition of cognition,

and then you can very clearly see

how that translates into
engineering parlance.

Now, a lot of the confusion

is because people think the word cognitive

is synonymous with consciousness.

Science does not have a
definition of consciousness,

but it absolutely has a
definition of cognition.

And so to your question, you know, why...

It's been around for a while.
That is absolutely true.

We didn't exactly call
it cognitive radar and EW

back in the early 2000s, but
that's what we were doing,

especially during my time at DARPA.

And why were we doing it and
why is it so popular now?

And that's really because
of a couple of factors.

One, the electromagnetic
spectrum environment

is ever more congested
and ever more contested.

Lots going on.

If you turn on your antenna,

you're gonna see all kinds of things.

And if you transmit,

you're gonna elicit
all sorts of responses.

And it's becoming increasingly difficult to sort that out.

The problem is even further exacerbated

by the fact that we have lots
of flexibility now globally

to transmit arbitrary waveforms,

to transmit signals that
hadn't been seen before.

You know, and back in the
old days, 30, 40 years ago,

especially radar transmitters
were much more constrained

in what they could do in terms of agility,

and they were much more
easily identifiable.

But today, that's all gone, and that's due to solid state front ends replacing tubes, magnetrons, klystrons, and things like that,

and of course, lots of embedded computing,

digital arbitrary waveform generators,

and so you need to actually
make decisions on the fly,

and you can only make decisions on the fly

if you understand what you're looking at.

And where cognitive really comes in and really separates itself

is this idea that you don't
just sit there passively

and listen and sort things out,

but you actually interact
with the environment,

you probe it,

and that probing can give
you a lot of information.

And that idea of probing and enhancing your understanding of the complete radar or jamming channel is at the heart of the cognitive approach.

- And so we've seen this, you know,

particularly over the last 10 years.

You know, we talk about the spectrum

being, you know, contested and congested,

and the complexity of trying to figure out

how to transmit and receive the signals.

And so if I'm understanding you correctly, I think when we talk about AI, we oftentimes talk about it in terms of speed, but it's not just speed, it's understanding and breaking down that complexity, and being able to adapt more effectively to a certain environment to transmit, receive, or collect that signal.

Is that correct?

- Yeah, so in the old days, you know, not that long ago, one would go out with a database of what you expect to see, and that database would, you know, provide a prescription of how you're supposed to behave in response to that.

Due to all of the advances that I mentioned before and other issues, that approach is just obsolete.

You know, you literally
have to get on station,

see what's going on, and do
a lot more of that analysis

that used to be done offline, online.

So it's almost like you need

to take all our subject matter experts

down at the EW program office

and have them with you right in the plane,

making decisions in real time.

But obviously that's not
practical for a lot of reasons,

because sometimes the speed involved

needs to be faster than
humans can process.

But that idea is basically
what the whole point is, right?

It's basically to embed
subject matter expertise,

brain power into the embedded computing

so that on the fly you can reprogram

and update your databases and respond.

I mean, that's the best way to look at it.

The old OODA loop, if you will,

was we go up, we fly, we
see things, we come down,

we update the database, so next
time we go out, we're ready.

That's all gone.

I mean, you have to have
that OODA loop on the plane.

Now, I could get into a
lot of technical detail

about, well, how on earth do you do that?

And that gets into the weeds obviously,

of the nature of the
signals you're observing.

And, you know, at least today

we have to obey the laws
of physics as we know it,

and we also have to obey the
laws of information theory.

Just because you have a high power transmitter doesn't necessarily mean you have the waveform that would really help you do tracking or detection.

You know, there's physics
and information theory,

and those are the anchors
that you start from

when trying to do this OODA
loop, if you will, on the fly.

- So could you go into that a little bit more, because I feel like anytime there's kind of two forces at play, we tend to confuse what we're focusing on when you talk about, you know, information and so forth, and the waveform, and what capability an adversarial threat has.

Could you talk a little bit
about how that has shown itself

in terms of some of the
development that you've done,

how we've adapted to that,

and kind of where that's leading us?

- In the context of electronic warfare,

a very key question is what is the nature

of that signal that you're observing?

So a good example would be,
let's say it's a spoof jammer,

like a DRFM, a digital RF
memory type of thing, right?

So on a radar scope, you
would see, let's say targets,

right, that look like targets
you may be interested in,

but are they real?

As you know, smart jamming can spoof you, feed you fake things that are not there.

And so while I can't
get into all the details

on how you might address that

for obvious reasons of sensitivity,

it does lend itself to a
proactive probing approach

where you don't just sit there passively

and accept the world as
it's presented to you,

but you actually then design
probes to answer questions.

Is it a real target or is it a false target is a really good example of probably one of the most advanced cognitive EW functions, you know, in the context of sorting out jamming signals.
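As a toy illustration of that probe-and-decide idea, here is a sketch of a Bayesian belief update. It assumes a made-up model in which a real scatterer responds to a waveform change more consistently than a repeater-style false target; all of the probabilities are invented, and nothing here reflects an actual fielded technique.

# Toy model only: invented likelihoods, no real technique described.
import random

def probe_response(is_real_target):
    # Assumed toy behavior: a real skin return tracks a waveform change
    # consistently; a repeater-style false target often lags or mismatches.
    p_consistent = 0.9 if is_real_target else 0.3
    return random.random() < p_consistent

def belief_after_probes(truth, prior_real=0.5, n_probes=5):
    belief = prior_real
    for _ in range(n_probes):
        consistent = probe_response(truth)
        # Bayes update using the same invented likelihoods.
        p_real, p_fake = (0.9, 0.3) if consistent else (0.1, 0.7)
        belief = p_real * belief / (p_real * belief + p_fake * (1 - belief))
    return belief

random.seed(0)
print(f"P(real) when the return is fake: {belief_after_probes(truth=False):.3f}")
print(f"P(real) when the return is real: {belief_after_probes(truth=True):.3f}")

Each probe is designed to answer a question, and each answer sharpens the belief, rather than passively accepting the scene as presented.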

And of course, if you flip the coin around and you're the jammer, then you're also playing Spy vs. Spy, right?

You're trying to figure out

whether or not what you are
doing is having an effect.

That's a very difficult thing these days,

especially, you know,
if you're anything other

than an extremely high powered jammer,

understanding the effect
that you're having on a radar

is not easy, generally speaking.

And so you have to get very sophisticated

in the way that you probe and then listen

and then update your knowledge base

as to what you're dealing with.

You know, again, what enables all that is highly flexible front ends, so that you can transmit arbitrary types of waveforms

that have interesting properties

and elicit interesting responses,

coupled with lots and lots of
high power embedded computing,

and that's where the intersection of AI

and cognitive EW takes place, right?

Because that's a lot of heavy lifting,

it's a lot of pattern
recognition, for example,

a lot of inference.

And of course, as you know,
the deep learning techniques

certainly have a play in that arena.

And that's why I think you're
beginning to see more and more

of this intersection
of AI and cognitive EW.

But again, I must stress
that cognitive systems

have existed, as you pointed out,

long before the modern deep learning AI

has sort of become all the rage.

- I wanted to ask you a little bit more

about the relationship

between AI, deep learning,
and cognitive systems.

Before we got on the show
here, you had mentioned

that, you know, cognitive
systems are a consumer of AI,

and I was wondering if you
could go into a little bit more

about discussing how those tie in,

because the two, they're not synonymous.

And so talk a little bit
about that relationship

and what we're maybe getting wrong

that we have to rethink or re-understand

to truly grasp what we're dealing with

when we talk about cognitive systems.

- Well, thank you for that question,

because this is really, really important.

A lot of people think cognitive
is synonymous with AI,

and it is not.

Again, I took great pains to use scientific definitions of cognition.

In Dr. Karen Haigh's book on cognitive EW, she also spends a lot of time trying to clarify that.

But unfortunately, you know, it's an easy mistake to make, 'cause they both sound alike, you know: intelligent machines, learning, cognition, that all sounds like AI.

All right, so again, cognition

is basically performing a subject
matter expertise function,

but on the fly.

Think of it as having the
world's best EW analyst brain

encapsulated on the plane.

And that's a form of expert system, you know, that is a form of AI, and traditionally that's what we used.

But cognitive systems is
definitely, as you said,

a consumer of AI.

It's not synonymous with it.

You can do cognitive radar and EW

without any of the modern
deep learning AI techniques.

Absolutely.

However, would incorporating modern AI techniques make it even better?

Potentially, yes. The answer is probably yes.

A good example would be looking
for certain features, right,

in a signal.

That's one thing that we care a lot about.

There's certain features we look for

to be able to sort it out, identify it.

And I think you're probably well aware

that modern AI is very, very
good at pattern recognition

when given the appropriate training data

or training environment.

And so rather than writing
out thousands of lines

of C++ code to do pattern
recognition, feature recognition,

the AI algorithm can implement that

in a deep learning network.

And by the way, it's blindingly fast, especially if you implement it on what are called neuromorphic chips, which we talked a bit about previously.

I mean, that's...

- We'll get to that in just a moment.

I knew that's where we were heading.

- Yeah. But again, you're right.

Thank you for making the point

that they're not synonymous,
cognition and AI,

and thank you for making the point

that cognitive systems
are a consumer of AI.

They don't need AI to be cognitive,

but as I just mentioned in that example,

they actually can be quite empowering.
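Here is a minimal sketch of that feature-recognition example, assuming PyTorch: a small one-dimensional convolutional network that learns signal features from raw I/Q samples instead of hand-coded C++ feature extractors. The architecture and sizes are toy choices for illustration only, not a fielded classifier.

# Minimal sketch, PyTorch assumed; toy shapes, illustration only.
import torch
import torch.nn as nn

class SignalFeatureNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3),  # 2 = I and Q channels
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                      # pool over time
        )
        self.classify = nn.Linear(32, n_classes)

    def forward(self, x):                                 # x: (batch, 2, samples)
        return self.classify(self.features(x).squeeze(-1))

net = SignalFeatureNet()
iq_batch = torch.randn(8, 2, 1024)                        # fake I/Q snippets
print(net(iq_batch).shape)                                # torch.Size([8, 4])

Given appropriate training data, the convolutional layers learn the discriminating features themselves, which is the trade Dr. Guerci describes: a trained network in place of thousands of lines of hand-written pattern-recognition code.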

- So with the notion that AI
can empower cognitive systems,

and you mentioned that it
would probably be better,

and I would tend to agree,

but in what ways might it not
improve cognitive systems?

What can you do without AI today in terms of cognitive systems, where AI doesn't really help you achieve your goal?

- I think it's best summed up

with the old computer science adage,

"Garbage in, garbage out."

A lot of people don't understand that this AI revolution is all about deep learning.

These are convolutional neural networks,

and how good they are depends on,

and this can't be emphasized enough,

how well they were trained.

By the way, it shouldn't be surprising

because we are neural networks.

Our brains are neural networks,

and if you weren't brought up properly

and educated properly, well,
you get what you get, right?

So the analogy is exactly the same.

And so the danger with AI is,

well, how do you know
that that training data

really represents what it's gonna see

when the shooting starts?

(exciting music)

- I wanted to touch on
the training aspect,

because we talk a lot in EW

about we need to have
realistic threat environments.

We need to, you know,
really train like we fight.

And that's becoming increasingly hard.

We opened the show, we
talked about, you know,

how crazy the spectrum is these days.

Maybe you can't get that realistic data for your AI from training without kind of rethinking how you conduct training in general,

and that's where it gets
into modeling and simulation.

And I think this touches,

then this gets into the
neuromorphic chip concept.

And that's where I wanna kind of go to,

like how do we model and
simulate a training environment

that gives us the data that we need,

that we know it's accurate,
our systems can learn from,

so that when we do go into the real fight,

we are actually using the knowledge

that we've built through the training.

- Yeah, that is the key
to the whole problem.

You put your finger right on it.

So AI is very good at
recognizing faces, for example.

We all have iPhones and what
have you, and why is that?

Because it's had plenty of
controlled training data.

This is a face, this is a
nose, these are, you know,

so sometimes we call that
annotated training data, whatever.

And the same goes with ChatGPT, right?

It's had access to
terabytes of plain text.

So there's copious
amounts of training data,

and the quality of the
training data was good enough,

as you can see in the results, right?

The conundrum for military applications,

and especially for electronic warfare,

is of course, wait a second, where am I gonna get terabytes of training data that is absolutely reflective and realistic of the environment I expect to see when I'm fighting, including all the potential unpredictable things that may pop up, new signals, you know, what are called war reserve modes, for example.

Well, I can give you an answer.

It's called digital engineering,

and that's both a cop out and
an answer at the same time.

Digital engineering is all
about replicating digitally,

including modeling and simulation,

all the physics, all the nuances,

all the technical constraints
associated with your system.

And as, you know, we've discussed in the past, the B-21 bomber and Next Generation Air Dominance have taken advantage of digital engineering to speed up their acquisition cycles.

Well, those same types
of tools can be co-opted

to train AI systems,

since if you have such an accurate

synthetic digital environment,

why not immerse the AI into that environment and let it learn and train, you know, on what it would expect to see in the real world?

And I think as we've mentioned previously,

in our previous discussions,
you know, we have done that.

In my own research,

we have advanced radio frequency
design tools that we use

where we can really replicate...

By the way, they're site-specific too, so you can pick a part of the world where you wanna operate, put emitters down, create that complex, congested, contested environment, and create copious amounts of data.

It is synthetic, admittedly, but it is definitely a next generation, modern simulation capability.

So if I had to give you an off-the-cuff answer, as if we were passing each other in the elevator, on how we solve the training problem for military applications, the simple answer would be digital engineering, as I've sort of outlined a little bit.
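As a toy sketch of that scenario-generation idea, the following Python snippet creates labeled synthetic snapshots by dropping random emitters into a scene. Real digital engineering tools model far more physics, terrain, propagation, antenna patterns, and so on; every parameter here is invented.

# Toy synthetic-scenario generator; invented numbers, no real tool's API.
import numpy as np

rng = np.random.default_rng(0)
FS = 1e6          # sample rate in Hz (invented)
N = 4096          # samples per snapshot

def emitter_waveform(freq_hz):
    # Simplest possible emitter: a complex tone at some frequency.
    t = np.arange(N) / FS
    return np.exp(2j * np.pi * freq_hz * t)

def synth_snapshot(n_emitters):
    """One labeled training example: a congested scene and its ground truth."""
    x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * 0.1  # noise
    freqs = rng.uniform(-FS / 2, FS / 2, size=n_emitters)
    for f in freqs:
        x += rng.uniform(0.5, 2.0) * emitter_waveform(f)  # random power
    return x, freqs                     # (received samples, emitter labels)

dataset = [synth_snapshot(rng.integers(1, 6)) for _ in range(1000)]
print(len(dataset), "labeled snapshots")

Because the scene is generated, the ground truth comes for free, which is exactly what makes synthetic environments attractive for training where real labeled data is scarce.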

- And you mentioned it is being used,

and I've referenced it.

I read an article

about the next generation
air dominance fighter,

I think that's what it's called.

- Yes.

- And they were saying

it's gonna start replacing
the F-22s around 2030,

and I'm sitting there, I'm like,

"How in the world is that gonna happen?"

And I was not thinking digital
engineering at the time.

I was like, "They need at
least another 15 years."

I mean, this is DOD.
- Yeah.

- And when we were talking,

I'm like, "Oh, that's exactly
what they were using."

But digital engineering is not a panacea, or it hasn't been fully used to its potential, maybe is a better way of putting it, 'cause there are short...

'Cause you mentioned,

you're not able to maybe
digitally replicate

every aspect of a design.

And I'm thinking
particularly of, you know,

you might be able to do an environment, but certain components within that environment might not lend themselves to it; like with the avionics on a jet, you might have more of a challenging time using digital engineering for that purpose.

Could you talk a little bit
about some of the limitations,

and then also when you mentioned this,

you said it's both an
answer and a cop out.

So how is it a bit of a cop out?

But what are some of the
limitations of digital engineering?

- All right, let's unwrap
that question there. So...

- Yeah, I like to ask
very complex questions

that give you wide range
to answer any way you want.

- It's great.

So why was digital engineering so successful with the B-21 and Next Generation Air Dominance?

Well, think about it, right?

Aerodynamics codes: all the major airframe manufacturers have been perfecting aerodynamic codes for decades and decades,

so that when they put a CAD model

into their digital wind tunnel,

it actually very, very
accurately represents

what you're gonna see
when you actually fly it.

The same is true with stealth codes: decades of perfecting the digital M&S, modeling and simulation, codes for stealth.

And engines. You know, the major components of airframes have wonderful tools that have been vetted over decades.

And guess what?

When they utilize those
tools, make predictions,

those predictions are very accurate.

As you point out, though,
and this is very important,

there's a whole lot of stuff

that gets stuffed inside those airframes,

call 'em avionics, radars, all the communication systems, and then there's antennas stuck on there.

That is the Wild West.

There is no uniform level of quality

in the digital engineering
tools for avionics, for example.

There are certainly pieces,

and some companies are better than others,

but it's nowhere near the level
of maturity and uniformity

as there is for, you know, aerodynamics

and things of that sort.

So it's a very important point.

So that's why it's sort of an
answer and a cop out, right?

If you have digital engineering
tools that are accurate,

it's an answer.

If you don't have them,
it's a cop out. (chuckles)

So, you know, I know that's the
first part of your question.

I think I forgot the second
part of your question.

- Well, I mean, I think
you actually did answer

both parts of the question,

'cause I was talking about,

you know, why it would be a cop out.

But, you know, with regard to
the challenges of avionics,

do we embrace that shortfall,

that challenge of digital engineering

when it comes to avionics?

Do we really truly understand

that when we embark on the overall design

of a system, an aircraft or what have you,

it would seem to me that, you
know, in today's fighting age

where spectrum dominance
is paramount to success,

avionics, what you have
on whatever system,

whether it be an antenna,
a radar, a jammer,

that's gotta be your starting point

in terms of your final system.

Is it a matter of simply
needing to spend a lot more time

maybe adapting digital
engineering to this field,

or is there something that we can do

to just kind of make
progress in this area?

- I would like what you just
said etched in stone somewhere,

because that's it.

We have to embrace digital engineering

in the electronic warfare world.

There's no other answer.

We're never going to be
able to go to a test range

and faithfully replicate what we would see

in a true, especially
peer-on-peer engagement,

without lots and lots of help,

which means a lot of synthetic signals

injected into the tests,
synthetic targets.

Some people call that live over sim, so a combination of simulation and live, you know, effects.

But we have to, and by the
way, I think, you know,

this kind of went unnoticed
by a lot of people,

but OSD came out in
December, I think it was,

and mandated that all
major defense programs

must employ digital engineering
unless they get a waiver.

And I thought, "Wow, I mean,
that's really quite something."

Now, you just pointed out, even on the B-21, they can point to all the digital engineering for the airframe and what have you and the stealth, but they can't do that for all the avionics inside.

So, but the DOD wants to go that way.

I think, you know, there's a digital engineering czar out of the Office of the Secretary of Defense.

Some people call it digital engineering, some people call it model-based systems engineering; MBSE is basically a synonym.

We don't have a choice.

And let me just add
one more wrinkle to it,

which is, well, wait a second,

who's creating the scenarios

that are used for training, right?

If I said before that
cognitive systems were invented

because we don't know exactly
what we're gonna be facing,

and yet you're saying,

but humans are creating the training data,

then there's a built-in flaw.

And the way that I've
seen to get around that,

and we're actually implementing it

on some of our research here
that we're doing for AFRL,

is let's let AI help create scenarios.

And why is that so powerful?

Well, DeepMind from Google
is the best example, right?

The original game-playing DeepMind system was trained on games from subject matter experts,

just the way we would do with
digital engineering tools

using humans to create the training data.

But when they let it play on its own, they just created a training environment, not data, and let it learn by trial and error, making dumb move after dumb move.

And guess what? There's no human that can beat it now.

So using AI to help with
the training portion

of another AI system is ironic,

but it seems to be the answer

to, well, how do you prepare
for the unknown unknowns?
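A minimal sketch of that environment-not-data idea, using tabular Q-learning on an invented five-state corridor: the agent starts out making dumb random moves and improves purely by trial and error against the environment, with no human-supplied training data.

# Toy self-teaching agent; the corridor world and constants are invented.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 5, 2            # actions: 0 = left, 1 = right; goal at far right
Q = np.zeros((N_STATES, N_ACTIONS))   # the agent's learned value table
alpha, gamma, eps = 0.5, 0.9, 0.2     # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Mostly exploit what it knows, sometimes try a "dumb" random move.
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[s].argmax())
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Learn from the environment's response, not from labeled data.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

# Learned policy: best action per state; 1 means "go right"
# (the terminal state's row is never updated and stays at 0).
print(Q.argmax(axis=1))

Nobody ever tells the agent the answer; the environment's rewards do, which is why an accurate synthetic environment can stand in for training data that nobody can collect.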

- Well, if you can use AI to train another AI to design an environment to train against, can you use AI through digital engineering to really almost design your next system?

It could anticipate: here's what you're gonna need, here's a design of what you're going to need, not based on any sort of real human modeling or human outline.

It's just letting AI run over and over again on all the mistakes made in the past, all the successes made in the past: here's exactly how your next system is gonna have to look in this threat environment that we're training against.

In some ways it removes us a step further from the design of the next generation systems.

So is that already being done?

Is that how it's so fast today

with the next generation air dominance?

- So once again, I'm gonna
give you a mealy answer.

Yes and no.

So yes, we need to let
AI, I'll say run amok

and come up with all these
scenarios on its own.

A good example might be, instead of coming up with the next generation air dominance fighter, why don't we just send in a million stupid drones and just overwhelm our adversary with millions of dirt cheap drones, right?

That would be something that,

I mean, an AI system wouldn't
be constrained, right?

It would just come up with that answer.

So, but here's what I will say about that.

The good news is while
we can let AI run amok

and come up with scenarios that
we wouldn't have thought of,

we can then look at those
scenarios, and we can say,

"Is that really something
we need to worry about?"

A colleague of mine came up with a great term for that: we call it the black swan.

You know, you might recall the whole point

about black swan was there's
no such thing as black swans

until one day we actually
found a black swan.

And so we like to call this
the black swan analysis stage,

where you let AI come up with things

that no human would've thought of,

but are physically allowed
and technologically allowed,

and then you look at that.

So now is where humans
come back into play.

Because look, we don't
have unlimited resources,

we don't have unlimited time.

We can't spend trillions of dollars

on a single aircraft, for example,

so a subject matter expert has to look at these black swan events and decide whether we really care about that.

And like I said, launching a million dumb cheap UAVs is certainly technically possible, but are we really gonna base our whole defense posture on something like that?

So it's a yes and no answer, right?

We're not letting go of the reins completely; we're only letting go of the reins

when we're asking AI
to use its imagination

to come up with things,

but we pull back on the reins

when we see what they've come up with

and decide whether or not

that's something we should care about.

- So we're talking about how
many things we can do with AI.

I wanna talk a little bit
more, kind of take a step back,

and continue talking a little
bit about how AI works.

And you had a slide in your webinar presentation, when we were talking about the relationship with AI,

and there's an aspect to AI

that's using neuromorphic
computing and neuromorphic chips,

and we were talking about this.

This concept just blew my mind,

because I really never
heard the term before.

So I wanted to kind of,

I wanna ask you to talk
a little bit about this.

What is this piece of the puzzle, and what does it hold in terms of the future

for artificial intelligence,

and then feeding into
cognitive radar and EW?

- So cognitive radar, EW,

live and die by embedded systems, right?

They don't have the luxury
of living in a laboratory

with thousands of acres
of computers, right?

They have to take all their resources on a plane or a UAV or whatever platform and go into battle.

And so to really leverage the power of AI,

you need to implement them

on efficient embedded computing systems.

Right now, that means FPGAs and GPUs, and with those things, when all is said and done, you know, all the peripherals required, the ruggedization, the MIL-SPEC, you're talking kilograms and kilowatts.

And as I pointed out,

there is a rather quiet
revolutionary part to AI

that's perhaps even bigger

than all the hullabaloo about ChatGPT,

and that's neuromorphic chips.

So neuromorphic chips don't implement

traditional digital flip-flop
circuits, things like that.

Essentially they actually, in silicon,

create neurons with interconnects.

And the whole point of a neural network

is the weighting that goes
onto those interconnects

from layer to layer.

And the interesting thing about that

is you've got companies like BrainChip in Australia, right, that is not by any stretch using the most sophisticated foundry to achieve ridiculously small line widths like conventional FPGAs and GPUs do.

Instead it's just a
different architecture.

But why is that such a big deal?

Well, in the case of BrainChip
as well as Intel and IBM,

these chips can be the
size of a postage stamp.

And because they're implementing

what are called spiking
neural networks, or SNNs,

they only draw power when
there's a change of state,

and that's a very short amount of time,

and it's relatively low-power.

So at the end of the day,

you have something the
size of a postage stamp

that's implementing a
very, very sophisticated

convolutional neural network solution

with grams and milliwatts

as opposed to kilograms and kilowatts.

And so to me, this is the revolution.

This is dawning. This is the
thing that changes everything.

So now you see this little UAV coming in,

and you don't think for a second
that it could do, you know,

the most sophisticated
electronic warfare functions,

for example.

Pulse sorting, feature
identification, geolocation,

all these things that require,

you know, thousands of lines of code

and lots of high-speed embedded computing,

all of a sudden it's
done on a postage stamp.

That's the crazy thing.

And by the way, in my research, we've done it.

We've implemented pulse de-interleaving, we've implemented, you know, ATR, automatic target recognition, specifically on the BrainChip from Australia, by the way.

So really quite amazing.
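To illustrate the event-driven idea behind spiking neural networks, here is a sketch of a single leaky integrate-and-fire neuron: it only does anything, emitting a spike, when its membrane potential crosses threshold, which is why neuromorphic silicon draws power mainly on state changes. The constants are invented for illustration.

# Toy leaky integrate-and-fire neuron; invented constants, illustration only.
import numpy as np

def lif_neuron(input_current, threshold=1.0, leak=0.95):
    v, spikes = 0.0, []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in            # integrate the input, with leak
        if v >= threshold:             # state change: emit a spike...
            spikes.append(t)
            v = 0.0                    # ...and reset the potential
        # otherwise: no event, and on neuromorphic silicon, almost no power
    return spikes

rng = np.random.default_rng(0)
print(lif_neuron(rng.uniform(0, 0.3, size=100)))   # sparse spike times

Because activity, and therefore power draw, is tied to sparse spike events rather than a continuous clock, you get the grams-and-milliwatts behavior Dr. Guerci contrasts with kilograms and kilowatts.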

- So where is this technology?

You said we've already done it.

We have a pretty good
understanding of what it can do.

And like you mentioned,
you know, a scenario

where whether it's a
UAV or whatever system,

I mean, something the
size of a postage stamp,

it completely changes size, weight, power,

all those considerations,

and makes almost anything a potential host

for that capability.
- Yeah.

- What are some of the next steps in this,

call it a revolution or rapid
evolution of technology?

I mean, a couple years ago, in trying to develop a domestic chip production capability, Congress passed the CHIPS Act to kind of help spur on domestic foundries, a domestic capability to produce chips.

And does this kind of
fall into kind of the...

Is this benefiting from
that type of activity?

Is this part of the development

that's happened through the CHIPS Act?

Is there something more
that we need to be doing

to spur on this innovation?

- Well, the CHIPS Act is a good
thing domestically speaking.

And by the way, part of the CHIPS Act is focused on neuromorphic chips, so that's good to know.

However, the real culprit is
the age-old valley of death,

bridging the valley of death.

And by the way, I spent
seven years at DARPA,

and even at DARPA with the
funds I had available to me,

bridging the gap between
S&T and Programs of Record

is still a herculean maze
of biblical proportions.

And so while you'll hear
lots of nice-sounding words

coming out of OSD and other places,

saying, you know, "We
gotta move things along.

We gotta spur small business. We gotta..."

it's all S&T funding.

There still is an extraordinary impediment

to getting new technologies
into Programs of Record.

And I, you know, I'm not
the only one saying that,

so don't take my word for it.

I can tell you lots of horror
stories, and I've done it.

I was successful while at DARPA.

So my stuff is on the F-35
and F-22, for example,

and other classified systems.

I mean, I know what it
takes to get it done.

Unfortunately, though there's a lot of lip service about overcoming that barrier, it still has changed very little in the 20 years since I was successful at DARPA in transitioning.

So I'm sorry, but that's the biggest impediment.

And I know it's not a technical thing,

and I know there's lots of-

- But here's what concerns me about that,

is, you know, the valley of death,

I mean, that's been in our terminology,

in our lexicon for decades,

like you say, going way back to the nineties and eighties, when the technology, while advanced at the time, pales in comparison to what we can do today, the process hasn't changed.

And so like if we had a
valley of death back then,

how are we ever going to bridge it today

with as fast as technology is moving,

as fast as the solutions we
need to prototype and field.

I mean, you mentioned it's herculean.

I mean, it's almost beyond that it seems,

because our system hasn't
really changed that much

over the past 20, 30 years.

- Yeah, so maybe it's ironic,
I don't know the right word,

but on the S&T side,
OSD, the service labs,

you know, I would say that
they're pretty forward-leaning

and they're making good investments.

The problem is getting
into a Program of Record

is where the rubber hits the road,

and where things get fielded.

And so you look at the S&T budgets,

you look at the number of small businesses

getting DOD S&T funds,

and you could almost say, "Well,
they're a success," right?

I mean, we're giving small businesses,

they're coming up with great things.

But then look at how much of that

actually ends up in a Program of Record.

And let me just be clear.

I don't blame the Programs of Record,

because the game is stacked against them.

Very often, especially if it's newer technology, they are having lots of problems with getting the baseline system fielded.

There's cost overruns,
there's scheduling issues,

and so they're already with
2.95 strikes against them,

and now all of a sudden you want to on-ramp an entirely new capability when they're already behind the eight ball.

That's just impossible,

unless the whole culture of
Programs of Record changes

where, for example, you structure it

so that every year you have to answer

how are you dealing with obsolescence?

How are you keeping up?
Where are the on-ramps?

How successful were you with
these on-ramps, these upgrades,

all of these things?

Because until you fix that,

I don't care how much
money you spend on S&T,

you're not gonna get fielded.

- From a technology standpoint,

let's just, you know, assume for a second

that we make some progress

in the policy side of the equation

as it pertains to acquisition
and the valley of death.

From a technology perspective,

you've been following this for 20 years.

You know, where are some
of the opportunities

that are before you that you're like,

this is the direction we need to go in,

this is something that excites you

or keeps you awake at
night in a positive way,

of like this is promising

and it's gonna be your next pursuit?

- Well, we definitely

have to embrace cognitive
systems for sure.

I mean, I don't think
there's anyone out there

that would say we don't need
that kind of flexibility

and adaptability on the fly.

Now, we can argue over just
how much cognition we need

and the flavors.

That's fine. So there's that, right?

Let's all just accept that.

And then I think you
touched on this earlier,

you know, there's a big
push across all the services

on what's called the JSE,

which is the Joint Simulation Environment,

which is this grandiose vision

for having multi-user, multiplayer,

high fidelity training environments,
synthetic environments,

which, by the way, can
include live over sim,

so that our systems
become much more realistic

and reflective of what
they're really gonna see

when they get out into the real world.

Again, I come back to lots
of good things going on

on the S&T side.

You almost can't, you know, you really can't argue with it, but that transition to fielded systems and Programs of Record is still very much broken, and that's just a fact.

And it's not just me saying that.

You can ask anyone who's in the business

of trying to transition technology

to the Department of Defense,

and they'll tell you the same thing.

So, you know, again, S&T community,

doing a great job, I
think, generally speaking,

your DARPAs, your AFRLs, all of these, but that transition piece is just a continuing problem.

And by the way, do our
adversaries have the same issues?

Some do, some don't, you know?

And this technology I'm talking
about, neuromorphic chips,

that's available to the world.

I mean, BrainChip is
an Australian company.

There's no ITAR restrictions, so.

- Well, and also I think it speaks

to the multidisciplinary
approach to technology today.

I mean, the neuromorphic chip,

I mean, it has military applications

you can obviously use it for,

but, I mean, you're gonna find this in all sorts of sectors of the economy and society and in what we use in everyday life, and so, you know-

- So Ken, let me just say that the neuromorphic chip that BrainChip makes in Australia had nothing to do with electronic warfare.

It's designed to do image processing.

So one of the things we had to overcome

was take our electronic warfare I/Q data,

in-phase and quadrature
RF measurement data,

and put it into a format to
make it look like an image

so that the BrainChip
could actually digest it

and do something with it.

So you're absolutely right.

I mean, these chips are
not being designed for us

in the electronic warfare community,

but they're so powerful

that we were still able to get it to work.

Imagine if they put a little effort

into tailoring it to our needs.

Then you have a revolution.
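The exact formatting used in that work isn't public; one common way to make I/Q data "look like an image" is a spectrogram, with time on one axis and frequency on the other. Here is a sketch of that assumption using SciPy, with a toy signal standing in for real measurements.

# Sketch of one plausible I/Q-to-image conversion (a spectrogram);
# the actual method used on the BrainChip is not described publicly.
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(0)
fs = 1e6                                            # sample rate (invented)
t = np.arange(65536) / fs
iq = np.exp(2j * np.pi * 1e5 * t)                   # toy tone at 100 kHz
iq = iq + 0.1 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))

f, frames, Sxx = spectrogram(iq, fs=fs, nperseg=256, return_onesided=False)
image = 10 * np.log10(Sxx + 1e-12)                  # dB "pixels" for the net
print(image.shape)                                   # (freq bins, time frames)

Once the RF data is a two-dimensional array of "pixels," an image-oriented chip can process it with no knowledge that it ever was a radio signal.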

So, sorry to interrupt you
there, but I just want...

You made a point and it's
very valid, you know.

- It's valid. It's valid, it's important.

I mean, it goes to just the possibilities

that are out there.

- Well, and to amplify that point,

all the advanced capabilities

that we have in our RF
systems, radar and EW,

most of that is driven by
the wireless community,

the trillion-dollar wireless community

compared to a paltry
radar and EW ecosystem.

So, you know, what's happening in the commercial world is where it's at, and leveraging, you know, commercial off-the-shelf technology is a gargantuan piece of staying up and keeping up, and by the way, of addressing obsolescence as well, right?

If you have a piece of proprietary
hardware from the 1980s,

good luck, you know,
with obsolescence, right?

- Well, that, and also
hopefully, you know,

as we move down this path on standards

and open systems and so forth,

some of that will work its way in.

We can adapt some of that so that we struggle less with obsolescence in the future than we do now.

- We hope.
- Hopefully, yes. I mean-

- Again-
- We'll see.

But, I mean, I would
think that's the idea.

- I mean, look at the day-to-day pressures

that Programs of Record are under.

So I'm not gonna get into
all kinds of details here,

but we had a capability that was vetted by the program offices and was developed under SBIRs, and went all the way through to a Phase III SBIR.

We have flight-qualified software

to bring this much-needed
capability to the war fighter.

This is all a true story.

And all of a sudden the program ran into scheduling and budgetary constraints, so they had to jettison the on-ramps, and so a capability that was vetted, a really important capability, just got thrown to the curb because of the everyday problems

that Programs of Record run into,

and that's not how they get judged, right?

They're judged on getting
that baseline system over...

Look, the F-35 was just
recently declared operational,

what, a month ago?

You gotta be kiddin' me.

- Well, Joe, I think this is a good spot

to wrap up. I mean, I feel like if we keep talking, we can keep going layer after layer after layer,

and I don't wanna put our
listeners through that,

but I think a good consolation prize

is to have you back on
the show in the future,

and we can go a little
bit deeper into this,

but I do really appreciate
you taking some time

to talk about this,

'cause with this topic, as of, you know, really 24 hours ago, I realized how often I just use the words, and I never really understood the depth of the definition of the words I was using,

so I really appreciate
you coming on the show,

kind of helping me understand this better,

and hopefully our listeners as well.

- Thank you, Ken.

You had great questions, great interview.

And let me give a shout out
to AOC. Great organization.

I'm personally, and my company is, a big supporter of AOC

and what you guys are doing,

so you're part of the solution,
not part of the problem.

- We appreciate that,

and, you know, appreciate
all that you've done for us

in terms of helping us understand
this really complex topic.

And really I do say this honestly,

I do hope to have you
back on the show here,

and there's no shortage of
topics of conversation for us,

so I appreciate you joining me.

- Thanks again, Ken.

- That will conclude this episode
of "From the Crow's Nest."

I'd like to thank my
guest, Dr. Joe Guerci,

for joining me for this discussion.

Also, don't forget to review, share,

and follow this podcast.

We always enjoy hearing
from our listeners,

so please take a moment to
let us know how we're doing.

That's it for today. Thanks for listening.

(exciting music)

(uptempo music)
- Voxtopica.
