Law and Technology

How to revisit the human angle in data-driven space governance

A recent paper explores the dilemmas confronting legal frameworks around AI-enabled satellites and spacecraft. But the paper stops short of practical prescriptions.

A RECENT ACADEMIC PAPER IN ACTA ASTRONAUTICA, “The protection of AI-based space systems from a data-driven governance perspective,” authored by Giovanni Tricco and eleven others, attempts to tackle a complex and timely topic: how our existing legal frameworks must adapt as AI-enabled satellites and spacecraft become more autonomous. Their work is a welcome academic development on data collection, cybersecurity, and IP issues at the crossroads of space technology and law.

The paper appreciates that AI will revolutionise space missions and underscores the legal imperatives this creates. However, while the authors chart useful technical and legal terrain, their view is largely technocratic and optimistic. In doing so, they overlook some hard realities. In particular, the study skirts the national-security and diplomatic fault lines that data sharing can open, glosses over which norms and objectives should truly guide AI governance, and stops short of practical prescriptions for AI-specific threats.

Below I highlight these gaps, and then suggest a missing ingredient: a “protocol of protocols” anchoring human oversight in any AI-space regime.

Data sharing’s political shadow

The paper rightly praises international cooperation – joint Mars missions, open Earth-observation data, and shared satellite platforms – as engines of scientific progress and trust. But this narrative is incomplete. In reality, outer space sits not only in the realm of collaboration but also of great-power rivalry. Military and intelligence applications of AI-driven satellites raise political and security stakes that the paper almost entirely ignores. For example, real-time high-resolution imagery from AI-enabled Earth-observation satellites can be intelligence gold. The U.S., Russia, China, and others carefully guard much of this data.

Satellite communications and GPS technology have clear dual-use character: they can serve civilian needs or be repurposed for military command and control. Yet the paper’s discussion of data sharing treats it as an unambiguous good. It elides the fact that states often restrict data exchange for national-security reasons. What do diplomatic tensions over space data look like? The authors do not say.

Equally, there is little attention to the diplomatic dimension of data standards. Who sets the rules for sharing? 

For instance, if the EU or the U.S. demand high cybersecurity protocols, but Russia or China insist on sovereignty over their AI systems’ data, how will this friction be resolved? The paper hints at international cooperation but never wrestles with the basic mistrust that often underlies it. A robust critique would note that any proposal for data-driven governance in space must acknowledge that states view satellites as strategic assets. Without diplomatic groundwork, even the most sophisticated legal mechanisms may yield nothing, because states remain politically unwilling to share sensitive information. In short, the analysis should balance rosy global-commons scenarios against the very real possibility of geopolitical gridlock. Tricco and others stop short of this balance, which is a notable omission.

Legal clarity and normative vision

Closely tied to data diplomacy is the need for clear laws and shared objectives. Tricco and others point to existing space treaties (like the Outer Space Treaty) and call for international collaboration and perhaps new frameworks. The paper urges ethical principles – fairness, transparency, privacy – as guiding stars. But these remain broad slogans. Missing from the paper is a discussion of which core values society should prioritise in this domain.

Should our goal be an open-access data regime to advance science at all costs? Or should we prioritise national-defense imperatives and risk control? How do we weigh the economic gains of AI satellites against existential concerns (for instance, a hacked AI navigating a satellite into conflict)? The article does not unpack these questions.

Legislatively speaking, the paper notes privacy laws like the EU’s GDPR and California’s consumer-privacy statute and says states need “comprehensive legislation” on space data. Yet it does not examine what such laws might actually look like, or how they would interact. More importantly, it overlooks how current and upcoming AI regulations often carve out national-security uses. For example, Europe’s AI Act explicitly excludes military and security systems. This means a big chunk of space AI – much of it developed by defense agencies – may evade civilian rules. The authors do not analyse this loophole or its diplomatic consequences.

In practice, we might need treaties, norms or imperatives that bridge civilian data laws and military secrecy. In other words, the paper raises the banner of “international collaboration” but does not define the destination or the roadmap. A sharper analysis would challenge readers: what normative framework do we want? Are we aiming for “AI in space under human control above all” or “free flow of data to accelerate innovation”? Those aims are not the same.

Without clarifying the desired outcome, talk of more regulations and collaboration remains a bit abstract.

Cybersecurity: Strong diagnosis, soft prescriptions

One of the paper’s strengths is its overview of cybersecurity challenges for AI-enabled spacecraft. It correctly identifies the stakes – that a hacked or spoofed AI could cause collisions or disable critical services – and it links these to state responsibility under space law. The authors explain traditional rules (such as the liability regime for space objects) and adapt them to cyberspace dilemmas. This review can help lawyers and scholars get up to speed on space law basics applied to AI.

But when it comes to prescriptions, the analysis stays at the level of generalities. The paper speaks of due diligence, encryption, and “periodic audits,” but these read as little more than boilerplate. What about specific AI threats?

For instance, adversarial attacks – deliberately feeding wrong images or signals into an AI to mislead a satellite – are not mentioned. There is no discussion of best practices for training space-AI on secure data, or for building redundancy into critical decision loops. Similarly, supply chain risks (tampered hardware for AI chips) or jamming/denial-of-service attacks on AI sensors get a passing nod at most. After laying out the many ways AI could fail or be attacked, the paper does not say, for example, “Here is a concrete framework for vetting software used in satellites” or “Here are model national standards for AI system certification.” This gap means the paper feels more like a threat catalogue than a crisis response plan.
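To illustrate what “building redundancy into critical decision loops” could mean in practice, here is a minimal Python sketch for technically minded readers. Every name and threshold in it is hypothetical – it sketches the defensive pattern, not code from the paper or any real mission:

```python
from dataclasses import dataclass

# Hypothetical tolerance: the maximum disagreement (in km) between
# independent position estimates before autonomous action is suspended.
MAX_DIVERGENCE_KM = 0.5

@dataclass
class PositionEstimate:
    source: str  # e.g. "star_tracker", "gps", "ground_radar"
    x: float
    y: float
    z: float

def estimates_agree(a: PositionEstimate, b: PositionEstimate) -> bool:
    """Crude redundancy check: Euclidean distance between two estimates."""
    dist = ((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2) ** 0.5
    return dist <= MAX_DIVERGENCE_KM

def authorise_autonomous_manoeuvre(estimates: list) -> bool:
    """Allow the AI to act only if all independent sensors roughly agree.

    A spoofed or adversarially perturbed feed is likely to diverge from
    the others; on divergence, the decision escalates to a human operator
    instead of executing autonomously.
    """
    for i in range(len(estimates)):
        for j in range(i + 1, len(estimates)):
            if not estimates_agree(estimates[i], estimates[j]):
                print(f"Divergence between {estimates[i].source} and "
                      f"{estimates[j].source}; escalating to human operator.")
                return False
    return True
```

The point is not the arithmetic but the architecture: no single (possibly poisoned) input channel should be able to trigger an irreversible action on its own.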

Pragmatically, lawyers and regulators need clear guidance: what actions should industry and governments take now? Here the critique must get specific. Are we to trust that generic calls for “secure protocols” are enough? In reality, agencies may soon debate measures like required kill-switches on autonomous satellites, or international channels for reporting cyber incursions.

These possible measures are missing. To be fair, formulating them is challenging, but it is precisely what a forward-looking policy critique should attempt. The paper raises alarms but stops short of proposing actionable next steps for our AI-laden, adversarial world.

Beyond code: A “Protocol of Protocols”

Perhaps the most conspicuous omission is the failure to suggest a new paradigm that blends law and human values. The paper remains anchored in treaties, industry standards, and technical solutions. It neglects the human governance dimension: how do we ensure that space AI never drifts into an uncontrollable realm? Here I propose a “protocol of protocols” – a meta-level governance layer that embeds human oversight and shared values into every technical standard or treaty.

What might this look like? Imagine an international agreement that requires every AI-controlled spacecraft to incorporate a specific oversight mechanism – say, a human-in-the-loop veto or a periodic performance audit by an appointed body. Much as modern internet standards increasingly build encryption in by default, here every space protocol (navigation, communication, data sharing) would be conditioned on an overarching accountability rule.

For example, before disclosing classified radar data to a foreign partner, the sending state’s AI system might be required to log why it accessed that data, with logs reviewed by an independent entity. Or for an AI-guided satellite mission, there could be a treaty clause that an on-call human operator must have authority to intervene.
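To make the logging-and-veto idea tangible, here is a minimal Python sketch of what such an accountability layer might look like. It is a thought experiment, not an implementation of anything the paper or any treaty prescribes; every name in it (AuditLog, human_veto, release_data) is invented for illustration:

```python
import json
import time

class AuditLog:
    """Append-only record of why an AI system accessed or released data,
    intended for later review by an independent oversight body."""

    def __init__(self, path: str):
        self.path = path

    def record(self, actor: str, action: str, justification: str) -> None:
        entry = {"time": time.time(), "actor": actor,
                 "action": action, "justification": justification}
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

def human_veto(action: str) -> bool:
    """Stand-in for the treaty-mandated human-in-the-loop step: an on-call
    operator must explicitly approve before the AI proceeds."""
    answer = input(f"Approve '{action}'? [y/N] ")
    return answer.strip().lower() == "y"

def release_data(log: AuditLog, dataset: str, recipient: str,
                 justification: str) -> bool:
    """AI-initiated data release, gated by logging and human approval."""
    action = f"release {dataset} to {recipient}"
    log.record(actor="onboard-ai", action=action, justification=justification)
    if not human_veto(action):
        log.record(actor="operator", action=action, justification="vetoed")
        return False
    return True
```

The substance of a real regime would lie in who holds the veto and who reads the logs; the sketch merely shows that an accountability rule can sit above any particular system’s code.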

This “protocol of protocols” approach has precedents on Earth. In nuclear governance, for instance, stringent international regimes require human safeguards despite the technology’s power. Similarly, cyber norms are emerging that insist on transparency measures (like threat-sharing agreements) above and beyond any single system’s code. Extending that thinking to AI space systems, one could envision a governance architecture that goes beyond code and algorithms to mandate shared monitoring and reporting frameworks. Such a layer would remind us that, ultimately, human judgment and diplomacy govern space as much as ones and zeroes do.

Tricco and others allude to ethics and transparency, but they do not spell out a mechanism for enforcing those ideas. By contrast, a “protocol of protocols” gives teeth to abstract principles: it says, for example, “no matter what hardware or software controls a spacecraft, there must be a built-in human review step”. This could take shape as an international charter, an industry consortium’s pledge, or even a requirement in export control lists (if a satellite uses certain AI algorithms, it must meet oversight criteria). It is admittedly a tall order, but given the stakes – a runaway AI spacecraft affecting global security – it is a missing ingredient that goes beyond the paper’s mostly techno-legal toolkit.

Charting a human-centered course

Tricco and colleagues have opened an important conversation on AI and space data. Their contribution is valuable as a survey of current issues: they map out how existing laws touch on AI in orbit, and they lay some groundwork for thinking about cybersecurity and data governance. Yet this critique has sought to show that the real world demands more. National security, diplomacy, and human values are not peripheral to space law – they are central. An interdisciplinary challenge like AI in space needs not only robust definitions of “data” and “cybersecurity,” but also clarity on who decides, based on what norms, and with what oversight.

Moving forward, legal scholars and policymakers must inject hard-headed realism into this discourse. We should press the question of what ends our AI governance serves (safety? openness? equity?), and build frameworks accordingly.

International collaboration is indeed vital, but it will only take us so far without mutual trust – something that only comprehensive norms and reciprocal transparency can build. And where the authors settle for general calls for ethics and collaboration, let us push them further: can we draft a “protocol of protocols” that stitches human accountability into every AI mission plan? Such ideas might feel ambitious, but given how high the stakes are in space, only ambitious thinking will do.

With AI rocketing into orbit, we must ensure our laws and policies keep our feet (and minds) firmly on the ground. Only then can data-driven space systems truly be governed in the interests of all.