By Stephen Chen
Published by South China Morning Post, 13 June 2022
The team ran thousands of simulated space battles in which the hunters developed the ability to ‘trick’ their target
Researchers believe there will be no role for humans in this type of conflict, with AI being used to power both hunter and prey
A research team in China said that an anti-satellite artificial intelligence system has mastered the art of deception in a simulated space battle.
In the experiment, the AI commanded three small satellites to approach and capture a high-value target, repeating the exercise thousands of times.
Eventually the targeted satellite learned to detect the incoming threat and fire up powerful thrusters to evade the pursuit.
But it was then lured into a trap after the AI ordered the three hunters to veer off their original trajectory, as if giving up the pursuit.
One of the hunting satellites then suddenly changed course and deployed a capturing device from a distance of less than 10 metres (33 feet).
“This is spectacular,” said lead scientist Dang Zhaohui, a professor of astronautics at Northwestern Polytechnical University in Xi’an, in a paper published in the domestic peer-reviewed journal Aerospace Shanghai on April 25.
Dang and his colleagues said that going after a large target in orbit was not as easy as most people thought.
Most past studies treated the pursuit as a mathematical optimisation problem, assuming the target to be slow, dumb and blind.
But in real life, powerful countries guard their valuable space assets with surveillance networks and early warning sensors, while AI can also be used to protect satellites.
Dang and his collaborators from the Shanghai Institute of Aerospace System Engineering said there would be no place for humans in the new type of space war they envisaged, with AI controlling both the hunters and their targets.
The researchers let AI play the game of pursuit and evasion repeatedly without human intervention.
At the end of each round, an AI “critic” evaluated the outcome, giving out rewards and penalties.
Consuming more fuel, spending too long or colliding with a teammate, for instance, led to penalties for the hunters, but rewards for their target.
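The paper's reward function is not reproduced in the article, but the description maps onto a zero-sum reward scheme common in multi-agent reinforcement learning, in which every penalty for the pursuers is credited to the evader. The Python sketch below is a hypothetical illustration of that structure only; all names, weights and thresholds are assumptions, not values from Dang's study.

```python
# Hypothetical sketch of the zero-sum reward scheme described in the
# article: hunters lose points for burning fuel, taking too long or
# colliding with a teammate, and the target gains whatever they lose.
# Names, weights and thresholds are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class RoundOutcome:
    captured: bool            # did a hunter close within capture range?
    fuel_used: float          # total hunter delta-v spent (m/s)
    elapsed_steps: int        # simulation steps consumed this round
    teammate_collision: bool  # did two hunters collide?


def score_round(o: RoundOutcome,
                fuel_weight: float = 0.01,
                time_weight: float = 0.001,
                collision_penalty: float = 50.0,
                capture_reward: float = 100.0) -> tuple[float, float]:
    """Return (hunter_reward, target_reward) for one pursuit-evasion round."""
    hunter = 0.0
    hunter -= fuel_weight * o.fuel_used      # consuming more fuel -> penalty
    hunter -= time_weight * o.elapsed_steps  # spending too long -> penalty
    if o.teammate_collision:
        hunter -= collision_penalty          # colliding with a teammate -> penalty
    if o.captured:
        hunter += capture_reward             # successful capture -> reward
    # Zero-sum: every hunter penalty is the target's gain, and vice versa.
    return hunter, -hunter


# Example: a slow, fuel-hungry round that still ends in capture.
print(score_round(RoundOutcome(captured=True, fuel_used=800.0,
                               elapsed_steps=5000, teammate_collision=False)))
```

A "critic" evaluating each finished round and assigning scores of this kind is consistent with standard actor-critic training, though the article does not specify which algorithm the team used.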
Both sides performed poorly in the first 10,000 rounds of training, with the total number of penalties far exceeding the rewards, according to the study.
The hunting satellites learned faster, “probably because they worked as a group”, and secured an advantageous position after about 20,000 rounds, said Dang.
But the targeted satellite gradually recognised the simple tactics used by the hunters and became better at avoiding pursuit.
Under the pressure of repeated defeats, the hunting AI reversed the game by developing much more sophisticated tactics including collaboration, forward planning and deception that significantly increased the chance of successful capture.
After more than 220,000 rounds of training, the target was left with “no room for mistakes”, according to Dang’s team.
However, some scientists have warned that AI could make space a more dangerous place.
“The application of artificial intelligence in space will have a disruptive impact on global strategic stability,” Cai Cuihong, an associate professor of international relations at Fudan University, and her colleagues wrote in a paper published in the Chinese-language Journal of International Security Studies last month.
“AI can make anti-satellite measures more precise, destructive and harder to trace, increasing the likelihood that some countries will carry out a ‘pre-emptive’ strike.
“Attacking satellites, especially early warning satellites, is often seen as a precursor to nuclear war,” she added, calling for the global community to develop laws to regulate AI activity in space.
Last year China complained to the United Nations that two SpaceX Starlink satellites had come dangerously close to the country’s Tiangong space station, forcing it to take evasive action.
Military researchers have subsequently said the country needs to develop the ability to destroy or disable the satellites if they threaten its security.