Explained: Why Artificial Intelligence in Trade Requires Smart Legal Regimes
New Delhi (ABC Live): The increased and expanding use of artificial intelligence (AI) is transforming the global economy. According to the 2019 Digital Economy Report, AI could generate additional global economic output of around $13 trillion by 2030, contributing an additional 1.2 per cent to annual global growth in gross domestic product.
Thanks to the availability of large quantities of data, improvements in processing power and advanced algorithms, AI is being used by traders to develop and deploy software and hardware systems that represent the next generation of automation (sometimes referred to as "intelligent automation").
AI is transforming trade not just in terms of new products and services being traded, but also in terms of increased efficiencies in trade-related activities such as supply chain management, the marketing of goods and services (including via online platforms), and the formation and performance of contracts.
Why Does Artificial Intelligence in Trade Urgently Require a Legal Regime for Smart Contracts?
Where AI is used in trade, a contractual relationship may exist between the person developing the AI system and the person deploying or operating the system (e.g., a contract for the development of an AI system) or between the person operating the AI system and an affected person (e.g., an end-user agreement for the supply of AI-enabled services).
In both of these cases, the distinguishing features of AI can present difficulties in applying existing contract law rules, particularly with regard to establishing the existence of a breach of contract and establishing that the breach caused the harm arising from the use of the AI system.
Lack of information about the algorithm running an AI system and the data it processes may make it difficult for a party claiming breach to establish a correlation between the inputs and outputs of the system (sometimes referred to as the "black box" problem). For instance, the difficulty may lie in establishing whether the party providing the AI-enabled service has performed what it undertook to perform according to the terms of the contract. The issue is compounded where contracts frame performance parameters (as that term is used in the Notes on the Main Issues of Cloud Computing Contracting) in broad terms.
Lack of information may also make it difficult for that party to establish that the breach was the cause of harm for the purposes of establishing contractual liability. For instance, the difficulty may lie in establishing whether damage or injury suffered was caused by the operation of the AI system itself, as opposed to the quality of the data processed by the AI system and supplied by a third party (or indeed by the party claiming breach).
The above-mentioned difficulties have the potential to shift the balance between contracting parties in the traditional sale context by putting the seller/supplier in a stronger position vis-à-vis the purchaser. Proposals have been put forward for rebalancing through education of the parties (e.g., the development of model contract provisions and good practice guides). In addition, proposals for legislative intervention, by way of enacting laws on smart contracts that oblige the operator of the AI system to comply with an emerging body of standards on the ethical use of AI, may also have a rebalancing effect, even if those proposals and standards are not specifically addressed to the trade context.