Does artificial intelligence need a constitution of its own?

This piece by Amit Kumar mulls the possibilities and pitfalls of developing a set of rules for artificial intelligence based on universal human values akin to a constitution.

In science fiction, superhuman intelligence taking over human civilisations has been an oft-repeated trope.

However, while the doomsday perils of an all-powerful artificial intelligence (AI) remain speculative, it is undeniable that the ascent of AI has been singularly dramatic compared with the pace of development of other newly emergent technologies.

Moreover, the advent of AI and the multiple applications it animates have precipitated profound transformations in the ways in which we think, act, process, trade and much more.

Today, AI systems are rapidly proliferating. They touch various aspects of human activity and are becoming increasingly powerful and invasive. They have outgrown a purely technical or mechanical manifestation, and the trend is likely to continue.

The all-pervasive nature and cross-border applications of AI have opened a Pandora’s box of questions related to ethical ambiguities, harm, biases, trust and accountability. Such challenges have prompted inquiries into whether AI should be governed on the basis of certain fundamental value orientations.

At the core of such questions is a growing realisation that as AI technology evolves and becomes increasingly intertwined with various aspects of our lives, ethical governance of technology becomes essential to ensure that such innovations maximise gains for society while minimising potential harms.

In this backdrop, constitutional artificial intelligence (CAI) emerges as a pivotal discourse. CAI signifies the integration of AI with universal values and constitutional principles and their overall interoperability, both with and within constitutional frameworks.

It stresses the need to move beyond the technical control and management of AI systems and applications and to bring them within the purview of ethical and constitutional governance.

CAI thus operates at the intersection of AI-driven technological advancement and the imperative to align it with fundamental human rights and values, embedding ethical–juridical sensibilities in the techno-architectural framework to ensure just, fair and non-discriminatory AI decision-making.

The imperatives for CAI

CAI entails a distinct shift from the way AI operational processes currently work. AI systems and large language models today employ machine learning, which means they can self-learn and acquire new knowledge without explicit programming by humans.

Self-learning is achieved by employing tools and techniques such as adaptive machine learning algorithms, neural networks and natural language processing, among others.

Such mechanisms help AI systems improve over time by finding patterns, understanding data and analysing the outcomes of decisions. The inputs they rely on and the informational sources they draw from therefore become critically important.
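
To make this concrete, the toy Python sketch below shows a model inferring a rule from labelled examples rather than being explicitly programmed with it. The data, learning rule and parameters here are invented for illustration and are not drawn from any production AI system.

```python
# A minimal, illustrative sketch: a perceptron "learns" a rule from
# labelled examples instead of being explicitly programmed with it.

def train_perceptron(examples, labels, epochs=20, lr=0.1):
    """Learn weights from (input, label) pairs by correcting mistakes."""
    weights = [0.0] * len(examples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = y - pred  # zero when the prediction is correct
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# The model is never told the rule "output 1 only when both inputs are 1";
# it infers that pattern (a logical AND) from the examples alone.
data   = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(data, labels)
print(w, b)  # weights that reproduce the learned pattern
```

The point the sketch makes is the one above: what the system ends up "knowing" is determined entirely by the examples it is fed, which is why the quality of inputs matters so much.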

As AI capabilities increase and systems are deployed across domains and in ever more complex environments, it becomes all the more important that their performance is optimised and moderated by values consistent with humanity's, to preclude deleterious consequences in the course of their decision making.

CAI assumes significance in this context by providing a framework to integrate certain fundamental constitutional imperatives as rules within AI operational settings.

CAI seeks to balance the often-conflicting demands of usefulness and harmlessness. This point deserves attention, since enhancing the usefulness of AI decision-making often involves tools and techniques that are more invasive and require greater access to personal data, which can engender bias and harm.

Currently, large language models like ChatGPT use a feedback mechanism known as reinforcement learning from human feedback (RLHF) to moderate decision making and output.

RLHF involves a human moderation interface where individuals evaluate and rate the AI system’s outputs and responses for the presence of negative elements such as aggression, toxicity and racial bias.

The system then ‘learns’ from this feedback to suitably tweak its responses. RLHF, however, has certain limitations, such as its dependence on human feedback, which can vary in quality and is difficult to scale to complex situations.

The feedback itself also runs the risk of being inherently biased. Besides, the approach entails significant time and cost overheads, especially as scenarios grow more complex and cross-cutting.
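
To illustrate the shape of this feedback loop, consider the deliberately simplified Python sketch below. Real RLHF trains a neural reward model on human preference data and then fine-tunes the language model with a reinforcement learning algorithm; the responses, ratings and update rule here are invented purely for illustration.

```python
# Toy sketch of the RLHF idea: hypothetical human ratings shift running
# scores over candidate response styles. Illustrative only; real systems
# train a neural reward model, not a lookup table.

candidates = {
    "polite_answer": 0.0,  # running preference score per response style
    "curt_answer":   0.0,
    "toxic_answer":  0.0,
}

# Made-up human feedback: raters penalise aggression or toxicity (-1),
# approve good answers (+1) and stay neutral otherwise (0).
human_ratings = [
    ("polite_answer", +1), ("toxic_answer", -1),
    ("polite_answer", +1), ("curt_answer", 0),
    ("toxic_answer", -1),
]

LEARNING_RATE = 0.5
for response, rating in human_ratings:
    # 'Learning' step: nudge the score toward the human judgement.
    candidates[response] += LEARNING_RATE * rating

# The moderated system now prefers the response humans rated highest,
# while the toxic option has been pushed down by the feedback.
print(max(candidates, key=candidates.get))  # -> polite_answer
```

The sketch also makes the limitation visible: every score in the table is only as good as the human ratings that produced it.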

CAI proposes an alternative model of AI governance. It aims to alleviate some of these weaknesses by constructing rule-based AI models in which constitutional principles are embedded in code and training datasets as explicit rules.

The constitutional principles provide specific instructions to AI chatbots on how to handle sensitive requests and how to align themselves with human values. Such information is then internalised by the AI system and underpins its decision making.
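
As a schematic illustration, the Python sketch below shows constitutional principles encoded as explicit rules that screen a draft response. The principle texts and naive keyword checks are invented for this example; in a real CAI system the model itself critiques and revises its outputs against the constitution rather than string-matching.

```python
# Schematic sketch of rule-based moderation. The principles and the
# crude keyword checks below are invented for illustration only.

CONSTITUTION = [
    {"principle": "non-discrimination",
     "forbidden": ["inferior race", "those people are"]},
    {"principle": "human dignity",
     "forbidden": ["worthless", "subhuman"]},
]

def review_response(draft: str) -> str:
    """Check a draft against each constitutional rule; decline if violated."""
    violated = [rule["principle"]
                for rule in CONSTITUTION
                if any(phrase in draft.lower() for phrase in rule["forbidden"])]
    if violated:
        return ("I can't answer that way; it would conflict with the "
                f"principles of {', '.join(violated)}.")
    return draft

print(review_response("Here is a balanced summary of the topic."))
print(review_response("Those people are worthless."))
```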

In other words, CAI informs large language models and trains them to qualify their output in terms of principles enshrined in constitutional texts, such as the rule of law, human rights and non-discrimination.

Embedding constitutional and ethical principles within AI can lead to a layering of artificial intelligence with certain overarching foundational principles, which would inform its decision making and pave the way for a rights-based approach to artificial intelligence.

Therefore, CAI can be understood as the integration of ethical-legal frameworks within AI systems to ensure that AI decision making and operational processes are premised upon the bedrock of constitutional principles enshrined in national constitutions and other universally accepted international rights-based instruments.

In fact, efforts are already being made to develop such constitutional frameworks for AI. For example, Anthropic, an AI safety and research company, has been working on such models to build safe and reliable AI systems by providing its AI applications with a set of rules premised upon constitutional values and principles.

Anthropic has also framed a constitution for its AI platform Claude, which draws from leading universally accepted human rights documents such as the Universal Declaration of Human Rights (UDHR) and contains value inputs enshrined as principles in the UDHR, such as freedom, equality, fraternity and human dignity.

Similarly, the concerns of the Global South, the sensibilities of non-Western audiences, and racial and ethnic sensitivities have also been factored into the framing of the constitution. These principles are built into the AI, inform its backend systems and serve as references against which it can evaluate its responses and decisions.
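
One plausible way such a constitution might be represented in code is as structured data in which each principle carries its source, so that outputs can be evaluated against named documents. The entries below merely paraphrase the kinds of principles Anthropic describes; they are not the verbatim Claude constitution.

```python
# Illustrative representation of a rights-based constitution: each
# principle records its provenance. Entries paraphrased, not verbatim.

claude_style_constitution = [
    {"source": "UDHR, Art. 1",
     "principle": "Prefer responses that respect freedom, equality and dignity."},
    {"source": "UDHR, Art. 2",
     "principle": "Prefer responses free of discrimination by race, sex or religion."},
    {"source": "Non-Western perspectives",
     "principle": "Prefer responses least likely to harm a non-Western audience."},
]

for rule in claude_style_constitution:
    print(f"[{rule['source']}] {rule['principle']}")
```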

CAI can also potentially help make AI rule-making processes more broad-based. Currently, a small number of AI companies and startups set the rules for AI operations. Such programming may therefore be vitiated by self-interest, commercial considerations or a limited personal or organisational understanding of ethics.

In the constitutional governance of AI, rules can be set and vetted by independent third parties, who can also cross-verify whether AI outputs are in sync with constitutional principles.

Furthermore, after an initial deployment, continuous refinement of AI models can be achieved by integrating feedback from legal and constitutional experts and other relevant stakeholders.

Such feedback loops would further strengthen and streamline the rules, ensuring that the system evolves while consistently adhering to the fundamental tenets of constitutional law.
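
A minimal sketch of what such a refinement loop could look like in practice is a versioned rule set, where every expert amendment is recorded and auditable. All names and rules below are hypothetical.

```python
# Hypothetical versioned rule set: expert feedback amends the deployed
# constitution while keeping every change auditable.

from dataclasses import dataclass, field

@dataclass
class RuleSet:
    version: int = 1
    rules: list = field(default_factory=list)
    history: list = field(default_factory=list)

    def amend(self, expert: str, change: str):
        """Record who proposed a change and bump the version."""
        self.rules.append(change)
        self.history.append((self.version, expert, change))
        self.version += 1

constitution = RuleSet(rules=["Uphold the rule of law.",
                              "Avoid discriminatory output."])
constitution.amend("constitutional-law panel",
                   "Flag outputs that could prejudice an ongoing trial.")
print(constitution.version, constitution.rules[-1])
```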

Conclusion

CAI is more than just a technical paradigm. It balances AI’s potential to benefit humankind against the need to adhere to the underlying foundational principles of human societies.

By intertwining constitutional principles with AI mechanisms, CAI can ensure that as intelligent systems become progressively more capable, they do not deviate from the core values that hold our societies together.

As AI-driven decisions proliferate across critical domains, researching, understanding and implementing CAI becomes not just valuable but indispensable.

The technological ecosystem around AI is still evolving and the pairing of a constitutional and rights-based framework with this technology is just beginning to gain ground.

It remains to be seen how effectively such harmonisation can be achieved within the highly dynamic AI landscape.

However, the idea of an independent set of continuously evolving rules at the core of frontend AI decision making is a welcome one.

It would certainly help democratise and broad-base AI decision-making systems and align them with universalistic value orientations. It could also respond to the critique of the distortions, biases and discriminatory orientations that AI is often accused of producing.