Russia Gave Lancet an AI Brain. Ukraine Can Still Beat It - Here's How
Fooling AI-targeting systems is the new bleeding edge of warfare

Wreckage from a Russian Lancet drone used to attack Kyiv appears to contain an autonomous targeting module built on Nvidia hardware.
This might sound familiar. Last July, I wrote about a Russian Shahed (the MS001) found with an American AI superchip that allows the drone itself to hunt and kill its own targets… restrained only by what it was trained on.
Now, Russia wants the ZALA Lancet to think for itself.
Military analyst Serhii “Flash” Beskrestnov said on March 17 that Russia likely shifted Lancet strikes toward Kyiv because Ukraine’s air defenses are getting harder to penetrate, and the Kremlin needed something headline-worthy to show Putin and the public.
That’s plausible…
But now, Russia is attempting to integrate autonomous targeting capabilities into the Lancet, including through AI modules based on Nvidia Jetson systems.
HUR also identified 62 electronic components in the two drones, most of them foreign-made (primarily American, Swiss, and Chinese), despite years of sanctions supposedly cutting Russia off from exactly this kind of technology.
I want to be precise about what HUR actually said, because the word “AI” has a remarkable ability to make otherwise intelligent people lose their goddamn minds.
HUR’s language is careful: Russia is attempting to implement elements of autonomous targeting. That’s not the same as a fully independent killer drone running around Kyiv while cackling in Terminator code.
It means Russia is trying to make the Lancet less dependent on a vulnerable human operator link, with the added benefit of making its own kill decisions in the terminal phase.
Okay, that sounds just as bad, actually.
For two years, a large part of Ukraine’s drone defense has been about breaking the link: Jam the drone, jam the GPS, cut the operator’s video feed, sever the control signal, and make the thing dumb enough to miss.
Sorry, I couldn’t resist this Spaceballs jamming clip:
That jamming logic still works. It may just not be enough anymore if the drone can still identify, reacquire, or commit to a target after the human link gets degraded.
And that changes the game in a very specific way…
What the fuck is Nvidia Jetson doing inside a Russian loitering munition?
As a reminder, I have a full AI in War breakdown that you can read here; I go into quite a bit of detail about Nvidia’s AI tech and how they’re trained to make the kill decision on their own. The primer is free, I don’t get any money from it, I just want us all to get smarter about AI in war. Although it would be cool if you subscribed; all the cool kids are doing it. Don’t be a square.
But if you’re short on time, here’s the skinny: The Nvidia Jetson family was built for robotics, autonomous vehicles, drones in the commercial sense, and embedded AI at the device level.
These boards take a heavy, trained AI model, compress the hell out of it, and stick it on a circuit board that can fit on a drone, plus give it access to the drone’s control surfaces.
That, my friends, is precisely how you build a terminator.
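To make "compress the hell out of it" concrete, here's a toy sketch of the core compression trick, post-training quantization, in plain Python. The hand-rolled int8 scheme and the numbers are mine for illustration; real pipelines use Nvidia's TensorRT and similar tools, but the principle is the same: trade a sliver of precision for a model small and fast enough to live on a drone.

```python
# Toy illustration of post-training quantization: squeeze 32-bit float
# weights into 8-bit integers plus one scale factor, roughly a 4x size cut.
# This is a hand-rolled sketch, not Nvidia's actual toolchain.

def quantize_int8(weights):
    """Map floats to int8 range [-127, 127] with one symmetric scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate floats from the quantized values."""
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.003, 0.91, -0.44]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)

# 32-bit floats -> 8-bit ints: ~4x smaller, with a small rounding error.
max_err = max(abs(a - b) for a, b in zip(weights, approx))
print(q, round(max_err, 4))
```

The rounding error is the price of admission: a slightly dumber model that fits in the power and weight budget of a loitering munition.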
Nvidia has stated, ahem: “Our Jetson Orin modules are consumer-grade products sold to students, developers, and startups for a wide range of beneficial applications. They are not available in Russia and are not designed for military purposes (wink wink).”
Note: I don’t think they actually put the “wink wink” in their release, but it would be hilarious if they did…
Russia is putting these boards in killer drones anyway, sourced through gray market channels running through Hong Kong, Singapore, Turkey, and China.
So, what happens when you put a $500 Nvidia AI board in a Russian munition?
Well, the drone gets a small onboard brain for computer vision and edge inference. It can process camera imagery, run object recognition models, and make targeting decisions locally, without sending every frame back to a remote operator and waiting for a response.
Because of this, the aircraft does not need a continuous high-quality data link to complete the terminal phase of an attack.
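Here's a hypothetical sketch of what that fallback looks like as logic. Every function name and threshold is invented, not taken from any real Lancet firmware; the point is simply that jamming the link no longer ends the attack, it just changes who's deciding:

```python
# Hypothetical terminal-guidance loop. Names and thresholds are invented
# for illustration; no claim is made about actual Lancet software.

def terminal_phase(frames, link_up, confidence_threshold=0.8):
    """Return a decision per incoming video frame.

    While the operator link is up, the human decides. If jamming cuts
    the link, the drone falls back to its onboard classifier and
    commits on its own once confidence clears the threshold.
    """
    decisions = []
    for frame_conf, linked in zip(frames, link_up):
        if linked:
            decisions.append("operator")   # human in the loop
        elif frame_conf >= confidence_threshold:
            decisions.append("strike")     # onboard model commits
        else:
            decisions.append("loiter")     # keep searching
    return decisions

# The link dies mid-attack; onboard confidence climbs until it commits.
print(terminal_phase(
    frames=[0.3, 0.5, 0.85, 0.9],
    link_up=[True, False, False, False],
))
```

Notice the ugly part: the jamming that used to end the engagement now just hands the kill decision to the model.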
The Russian Shahed MS001 I wrote about last year is powered by an Nvidia Jetson Orin capable of 67 trillion operations per second, and can process thermal imagery, run object recognition, handle telemetry, and make decisions in real time, on the fly.
The Lancet isn’t there yet: HUR says Russia is attempting to add these elements, not that it’s achieved full terminal autonomy. That wording could also be HUR downplaying the threat, either to avoid panicking anybody or to avoid giving Russia more credit than it deserves.
But the engineering direction is clear, and the hardware foundation exists and is being smuggled into Russia.
Actually, I think there’s a larger question here about the scale of the problem: the CIA and DIA are likely the only ones with a clear answer, but I’d love to know exactly how many Nvidia AI boards have made their way into Russia.
Scale makes a difference. If it’s 10, that’s an annoying issue. If it’s 10,000, that’s a much bigger problem.
The practical battlefield consequence is that an operator can guide the Lancet most of the way to its target, then let the terminal phase ride on onboard recognition if the link gets degraded.
Or the system can reacquire a target after signal disruption.
Or it can narrow down possible targets faster than a human operator fumbling with a grainy feed and strike on its own.
How Ukraine beats an AI-guided Lancet
To be clear, this is not a solved problem. Ukraine can’t stop an autonomously guided loitering munition or one-way attack drone with jamming alone, no matter how big its antenna is (giggity).
Instead, we need a set of tactics and approaches that Ukraine can put to use in real time. But those tactics depend entirely on how onboard AI chooses its targets…
The Lancet’s model would have to be trained back in mother Russia on thousands of images of whatever it is the Kremlin wants the drone to strike.
Let’s say it’s a Ukrainian command post (CP).
(In reality Ukraine calls these Operational-Tactical Groups or OTUs, similar to American TOCs for Tactical Operations Centers; but let’s just use CP).
Presume Russia knows that all Ukrainian command posts share a particular feature… For instance (and this is just an example), Ukraine digs a latrine roughly 45 meters from the command post, eight times out of ten. It’s just SOP.
Or Ukraine always uses a particular tent (donated by Canada) for its CPs.
Or Ukraine always has three antennas sticking out of its CPs.
Someone in Russia would then have to manually train the AI on hundreds or thousands of images and video of Ukrainian command posts in different lighting conditions, weather, time of day, angles, etc.
In this sense, the killer drone is only as good as its training data. If the Russian trainer is exceptionally lazy, and only uploads five grainy images, the drone would be Gomer Pyle.
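Here's a toy demonstration of the Gomer Pyle effect. Real computer vision is far more complicated than this nearest-mean matcher, and every signature and number below is invented, but the data hunger is the same: a classifier trained on a handful of samples learns a sloppy picture of its target class.

```python
import random

# Toy stand-in for "only as good as its training data": a nearest-mean
# matcher trained on noisy 'images' (here just numbers). Everything in
# this sketch is invented for illustration.

random.seed(42)

TRUE_CP, TRUE_TRUCK = 10.0, 14.0   # hypothetical class signatures

def noisy(sig, n):
    """Generate n noisy training samples of one class signature."""
    return [sig + random.gauss(0, 2.0) for _ in range(n)]

def train(n_samples):
    """Learn one mean per class from n noisy training samples each."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(noisy(TRUE_CP, n_samples)), mean(noisy(TRUE_TRUCK, n_samples))

def accuracy(model, trials=1000):
    """Classify fresh noisy observations against the learned means."""
    cp_mean, truck_mean = model
    correct = 0
    for _ in range(trials):
        sig, label = random.choice([(TRUE_CP, "cp"), (TRUE_TRUCK, "truck")])
        obs = sig + random.gauss(0, 2.0)
        guess = "cp" if abs(obs - cp_mean) < abs(obs - truck_mean) else "truck"
        correct += guess == label
    return correct / trials

print("trained on 5 images:   ", accuracy(train(5)))
print("trained on 5000 images:", accuracy(train(5000)))
```

With five samples, the learned class means wander; with thousands, they converge on the true signatures. Swap "numbers" for "thermal silhouettes in rain, at dusk, at odd angles" and you have the trainer's real workload.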

Knowing that, we can brainstorm some ways that Ukraine can defeat Russia’s AI targeting…
This list is not all-inclusive. Some of these are already being used. Some build on each other. These are just some of my ideas.
Drop your ideas in the comments…
First, I think deception and camouflage will become more important.
If the Lancet’s onboard model is trained to recognize an artillery silhouette, a CP, a hot engine deck, or a specific armored vehicle profile, Ukraine’s job is to make actual targets stop looking like the training data.
That means multispectral camouflage, similar to Saab’s Barracuda MCS, hiding not just visible shape but thermal signature and radar return.
It means breaking outlines with netting, berms, and shadow management. It means thermal masking to suppress the heat cues a computer vision model has likely learned to prioritize.
A human operator can sometimes read through camouflage because humans are annoyingly good at context.
A narrow onboard vision model is usually dumber.
It sees what it was trained to see. That’s exploitable, if Ukraine is disciplined about it.
Next, Ukraine needs dense, believable decoys.
What Ukraine needs is targeting ambiguity at scale: false revetments, fake emitters, visual clutter around real assets, mixed vehicle parks that force the model to make hard classification choices between multiple plausible targets.
Make the battlefield look like a “spot-the-difference” puzzle designed by someone who genuinely wants the model to fail.
Ukraine already understands the economics of deception better than most NATO staffs. The whole war has been a brutal education in how cheap misdirection forces expensive decisions.
AI-guided targeting doesn’t eliminate that logic. It just makes it more critical.
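The economics are worth a back-of-the-envelope sketch. Every figure here is invented (the munition and decoy prices are illustrative, not sourced), but the shape of the exchange is the point:

```python
# Back-of-the-envelope decoy math. Every number is invented for
# illustration; what matters is the shape of the exchange, not the figures.

def decoy_exchange(real_assets, decoys, munition_cost):
    """Expected munitions (and money) spent per REAL asset destroyed,
    assuming the classifier can't tell real from fake and so strikes
    a uniformly random member of the target pool."""
    pool = real_assets + decoys
    shots_per_kill = pool / real_assets
    return shots_per_kill, shots_per_kill * munition_cost

# 2 real guns hidden among 18 decoys, at a hypothetical $35k per Lancet:
shots, cost = decoy_exchange(real_assets=2, decoys=18, munition_cost=35_000)
print(f"{shots:.0f} expected strikes and ${cost:,.0f} spent per real gun hit")
```

At, say, $1,000 a decoy, Ukraine spends $18,000 on fakes to make Russia spend $350,000 of munitions per real kill. That cost inversion is the brutal education mentioned above, in arithmetic form.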
Next, Ukraine needs shorter exposure windows.
An AI vision model still needs data.
It needs frames, stable angles, and persistent visual truth about a target. The less time a target is static and visually legible, the harder confident classification becomes.
“Shoot and scoot” was already smart against counter-battery radar in the artillery world. In an AI-targeting environment, mobility is also a way of starving the model of stable visual input.
The drone needs the target to hold still long enough to be certain. Ukraine’s job is to make that certainty impossible.
As I’ve said before, in Ukraine, speed equals armor.
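A crude way to see why dwell time matters. This model of "lock-on" is entirely hypothetical, treating each video frame as an independent look at the target, but it shows how fast the required exposure window balloons when a target is fleeting and cluttered:

```python
# Hypothetical track-confidence model: the classifier needs enough
# consistent frames before it 'locks'. All numbers invented for
# illustration; real trackers are far more sophisticated.

def frames_to_lock(per_frame_conf, threshold=0.95):
    """Frames of evidence needed before cumulative confidence passes
    the threshold, treating each frame as an independent look that
    fails to confirm the target with probability (1 - per_frame_conf)."""
    miss, frames = 1.0, 0
    while 1 - miss <= threshold:
        miss *= (1 - per_frame_conf)
        frames += 1
    return frames

print(frames_to_lock(0.6))    # clean, static, well-lit target
print(frames_to_lock(0.05))   # moving target glimpsed through clutter
```

A clean static target confirms in a handful of frames; a target that's only glimpsed needs over an order of magnitude more dwell time, which is exactly what a shoot-and-scoot battery refuses to provide.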
There is something to be said about how long, in seconds, it takes an Nvidia AI board to recognize a pattern, but I’ll save that for a future article because this one is already getting quite lengthy.
Next, cheaper kinetic kill layers are always useful.
If onboard guidance hurts EW effectiveness in the terminal phase, Ukraine needs more ways to physically kill or disrupt drones close in, before they reach the phase where autonomy matters.
Interceptor drones, autocannons, truck-mounted gun systems, portable anti-drone teams, lasers (pew pew), microwave weapons... Shit like this.
Finally, consider model poisoning and pattern warfare (DECOYS ON STEROIDS).
If Russia is training its targeting models on real battlefield footage, and it almost certainly is, then Ukraine just needs to control what the model sees.
In practice, that means flooding the battlefield with believable lies.
You don’t put one fake howitzer in a field. You put ten. You vary the shape. You vary the thermal signature. You park them in patterns that look real enough to pass a quick classification pass, then move them, swap them, or destroy them yourself so the visual record stays messy.
You make sure that every time a Lancet operator or onboard model “learns” what an artillery position looks like, there’s a decent chance it just learned the wrong lesson.
Do that often enough and you’re not just protecting one target. You’re degrading the enemy’s future targeting confidence.
Same idea with silhouettes. If Russian models are trained to recognize the outline of a Ukrainian Buk launcher or a towed howitzer, then break the outline.
Add clutter. Add netting that distorts edges. Park near junk.
Make the model hesitate. Computer vision loves clean shapes. War should give it garbage.
Thermal deception matters too. If the model keys off heat signatures, then give it competing heat sources. Burn barrels. Engine heaters. Fake exhaust plumes.
Create scenes where the “hottest, most important thing” is not actually the target. Now the model has to choose, and every bad choice becomes future training data if it gets captured.
Then there’s repetition.
If the same false patterns show up again and again, in different locations, under different conditions, the model’s internal assumptions start to drift.
It either becomes overconfident in the wrong things or overly cautious across the board. Neither is good for the attacker. A model that hesitates or misclassifies at the wrong moment is a dead drone.
Ukraine needs to make the battlefield itself a hostile training environment.
The fight is now shaping what the drone believes is true before it ever commits to the attack.
That’s a very modern kind of warfare. I think this type of thinking (how to beat AI targeting) might just be the next big military occupational specialty between now and 2030. And remember… this is defeating AI targeting on the drone, not to be confused with campaign-level AI-targeting like Maven. I’ve discussed Maven before (specifically in Iran) and that targeting would be significantly more difficult to fool.
But let’s slow down for a sec. The word “AI” consistently produces irrational responses in the media, in Congress, and in defense procurement circles… and the Lancet story is already generating some of that heat.
HUR’s own reporting around the March 16 Kyiv incident reflects some uncertainty; Ukrainian officials were divided on whether a Lancet with autonomous capabilities was even involved, with some suggesting Russia may have deliberately dropped drone fragments as part of an information operation.
That caveat is important. Russia has every incentive to make Ukraine believe its drones are more capable than they are.
Psychological deterrence through capability exaggeration is a legitimate Russian information warfare tactic. Maskirovka at its finest…
The honest assessment of the Lancet AI situation is this: Russia may be attempting to add elements of autonomous terminal guidance to a weapon that has been effective without it.
If those attempts mature, Ukraine’s electronic warfare advantage in the terminal phase degrades.
If those attempts remain unreliable (which is entirely possible, given how brittle battlefield computer vision tends to be in real-world conditions), Russia has added cost and complexity to a weapon without proportionate gain. In other words, they’re wasting their hard-earned money.
The truth probably sits somewhere in between, and Ukraine has to plan for the version that works.
The answer is disciplined deception, dense decoys, thermal lies, kinetic cheap kills, mobility, and ruthless supply-chain warfare against the foreign electronics keeping Russia’s autonomy stack operational.
So, the answer to AI targeting may not be better AI. Sometimes the answer is engineering reality to be too messy for the AI to understand.
That, frankly, is a very Ukrainian way to solve a problem.
Слава Україні!
Hey friends, Wes here. If you’re new, I write about the Ukraine War and military technology in general (with a side of dark humor). I’m a veteran of two branches of the US military, have a J.D., MBA, and a degree in international relations. I speak bad Russian, appear as an occasional talking head on CNN, and have a YouTube channel about to hit 150,000 subscribers. It would be awesome to see you over there!
Stay frosty. -W




