The answer to autonomous drones is disciplined deception, dense decoys, thermal lies, cheap kinetic kills, mobility, and ruthless supply-chain warfare against the foreign electronics keeping Russia’s autonomy stack operational.
AI Needs Order. War Produces Chaos.
Reading time: 5–6 min
AI isn’t the story. Control is.
What Russia is attempting with autonomous targeting in Lancet drones is not a technological leap in the way it is often described. It is a structural adaptation.
For two years, Ukraine has exploited a simple vulnerability: the link between operator and weapon.
Jam the signal. Break the feed. Sever the connection.
Make the system blind.
Autonomy — even in a limited, terminal form — is an attempt to remove that weakness.
Not to create a “thinking drone”, but to ensure that the weapon can complete its task even when the human link is degraded.
That matters.
But not in the way the headlines suggest.
Autonomy does not remove friction. It relocates it.
An AI-guided system does not operate freely. It depends on three things:
training data
hardware supply chains
assumptions about what the battlefield looks like
Each of these introduces a new point of failure.
A model can only recognise what it has been trained to see. It looks for patterns, signatures, regularities.
And war, at its most effective, removes exactly those things.
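To make that concrete, here is a minimal sketch of the failure mode, assuming a recognizer that scores scenes against a learned signature. Nothing here reflects the Lancet's actual pipeline; the template, noise levels, and commit threshold are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a learned target signature (e.g. a howitzer's visual or
# thermal profile, flattened into a feature vector). Purely illustrative.
template = rng.normal(size=256)
template /= np.linalg.norm(template)

def match_confidence(observation: np.ndarray) -> float:
    # Cosine similarity against the trained template, a crude proxy
    # for a recognition model's matching score.
    return float(observation @ template / np.linalg.norm(observation))

COMMIT_THRESHOLD = 0.6  # the seeker only dives if it is "sure enough"

# A clean scene: the target looks like the training data.
clean_scene = template + 0.02 * rng.normal(size=256)
# A distorted scene: nets, thermal blankets, clutter, decoy shapes.
distorted_scene = template + 0.15 * rng.normal(size=256)

for name, scene in [("clean", clean_scene), ("distorted", distorted_scene)]:
    c = match_confidence(scene)
    verdict = "COMMIT" if c >= COMMIT_THRESHOLD else "abort / keep searching"
    print(f"{name:9s} confidence={c:.2f} -> {verdict}")
```

The numbers are arbitrary; the shape of the failure is the point. The model does not degrade into human-style guessing. Below its threshold, it simply cannot commit.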
Ukraine has not won advantages in this war by being more technologically advanced in every domain.
It has done something more fundamental.
It has made the battlefield difficult to interpret.
Messy. Deceptive. Unstable.
That is not accidental. It is a method.
The fight is shifting: from breaking the link to corrupting perception.
If a system depends on visual recognition, then visibility becomes a vulnerability.
If a model needs stable patterns, those patterns can be broken.
If it seeks clarity, it can be fed noise.
Camouflage is no longer just concealment. It is distortion.
Decoys are no longer secondary tools. They are part of the targeting environment itself.
Mobility is not just protection. It is a way of denying the system the time required to become certain.
The objective is not only to survive the strike. It is to degrade the system’s ability to decide.
A system that cannot decide is a system that cannot strike effectively.
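A rough sketch of why this works economically, assuming a seeker that genuinely cannot separate the real asset from credible fakes. Every figure below is an invented placeholder, not a sourced cost:

```python
# All numbers are invented placeholders, not sourced figures.
MUNITION_COST = 35_000   # assumed cost of one loitering munition, USD
DECOY_COST = 1_000       # assumed cost of one inflatable/thermal decoy, USD

def expected_cost_per_real_kill(n_decoys: int, believability: float) -> float:
    # 'believability' = fraction of decoys convincing enough to enter
    # the seeker's candidate pool alongside the one real target.
    credible_decoys = n_decoys * believability
    p_hit_real = 1.0 / (1.0 + credible_decoys)
    # Expected shots, assuming decoys are cheap enough to be replaced
    # between strikes so each shot faces roughly the same dilution.
    expected_shots = 1.0 / p_hit_real
    return expected_shots * MUNITION_COST

for n in (0, 3, 9):
    cost = expected_cost_per_real_kill(n, believability=0.8)
    print(f"{n} decoys: attacker pays ~${cost:,.0f} per real target; "
          f"defender paid ${n * DECOY_COST:,} for the decoys")
```

Under these assumed numbers, nine cheap decoys multiply the attacker's cost per real kill roughly eightfold, while costing the defender a fraction of one munition.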
This is where the broader strategic pattern becomes visible.
Across both Ukraine and the Gulf, the same dynamic is emerging:
The side that depends on precision must preserve clarity. The side that endures only needs to sustain disruption.
Russia’s move toward autonomy is, in part, an attempt to restore precision under degraded conditions.
But that creates a new dependency — on perception.
And perception is fragile.
In the Strait of Hormuz, the objective is not perfect targeting. It is persistent uncertainty.
Mines, drones, missile threats — none of these need to function flawlessly.
They only need to make the environment unstable enough that navigation, insurance, and logistics begin to fail.
The same logic now appears on the battlefield.
Not the destruction of systems. But the erosion of their ability to interpret reality.
The side that needs precision must see clearly. The side that endures only needs to make vision fail.
This is why the current discussion of “AI in warfare” often misses the point.
The question is not whether machines can make decisions.
The question is whether those decisions can be made reliably in an environment designed to confuse them.
So far, Ukraine has shown a consistent ability to shape that environment.
Not by outbuilding systems.
But by making them less effective.
AI does not make war autonomous.
It makes perception decisive.
And perception, in war, is always easier to break than to build.
If you found this analysis useful, consider subscribing. This publication focuses on the strategic logic behind war, power, and international politics.
About the idea of decoys - this kind of idea was significant in WWII. The magicians and artists have a role in war, too. https://www.bbc.com/culture/article/20210223-the-artists-who-outwitted-the-nazis
Thank you, a very good article! Yes, all the good-side countries have to up their game to identify and track which entities are sending this high tech to the bad side.
Your description of the Ukrainian CPs - tents, antennas, and all - maybe they should be rearranged among those decoy “targets” so astutely described above.
The illusionist Jasper Maskelyne was employed by British intelligence during WW2 to create illusions at scale, including moving Alexandria harbour to somewhere it wasn’t, and creating a ghost armoured force to the south of the Alamein line while hiding the real armoured force to the north. These illusions essentially used decoys to convince the enemy he was seeing what he was predisposed to see, and what Maskelyne wanted him to see, rather than what was really there. A more intriguing illusion, designed to defend the Suez Canal from German bombers (which could, with a single strike, sink a ship and block this vital logistics route), was to make the canal impossible for German bomber pilots to locate accurately at night. This used an altogether different approach, relying on specially equipped searchlights along the length of the canal, about 30 miles apart (I think), to generate rotating dazzle effects that completely disoriented any approaching pilot. So effective was this technique that Maskelyne’s test flight allegedly nearly came to grief attempting to fly through it. Assuming AI modules (or indeed FPV drones) are no longer using inertial or GPS guidance in the terminal phase but are instead using visual or infrared cues to identify targets, perhaps an updated version of something like this could be used to overload the visual/IR sensors of incoming drones?
Thanks, Wes. Is the world market just too porous to stop advanced technology from entering Russia? It may be a little outside your wheelhouse, but I'd be interested in what steps the West isn't taking right now - but should - to shut down the trafficking of Western tech to Orc-land. Any thoughts?
Hey Craig, yeah it's a huge problem. Probably frustrates the hell out of the Ukrainians. It’s coming through middlemen, shell companies, and third-country brokers in places like the UAE, Turkey, and parts of Asia. That’s the gap.
What the West isn’t doing aggressively enough is enforcement. We’ve got the sanctions on paper. What we don’t have is consistent follow-through with teeth: blacklisting intermediaries faster, going after the financial networks, and holding companies accountable when their components keep showing up in Russian systems. This isn’t a tech problem. It’s a willpower problem. In my mind, Nvidia is on the hook here and should be held liable if their AI boards show up on Ruski junk.
So, we have the tools, but choose not to use them. Whether Mammon, the Golden Calf, Shaytan or other, the allure of wealth has worshipers across the globe.
Curious what the Russians are targeting with Lancets? If it’s civilian apartments or power plants, that’s going to be much harder to spoof, I would imagine.
That's a good question, Lou. Depending on what survived the crash, Ukraine, in theory, should be able to extract the trained model from a downed Lancet, assuming the board is still intact. I imagine Russia wouldn't waste an AI-integrated Lancet on something large that they could strike with a conventional munition. But it's possible they don't fully realize the AI's potential.