Regulating ChatGPT - The European Liability Regime for Large Language Models
ChatGPT and other large language models (LLMs) raise many of the regulatory and ethical challenges familiar to AI and social media scholars: they have been found to confidently invent information and present it as fact. They can be tricked into providing dangerous information even when they have been trained not to answer such questions. Their ability to mimic a personalized conversation can be highly persuasive, which creates significant disinformation and fraud risks.
They reproduce various societal biases because they are trained on data from the internet that embodies such biases, for example on issues related to gender and traditional work roles. Thus, like other AI systems, LLMs risk sustaining or enhancing discrimination, perpetuating bias, and promoting the growth of corporate surveillance, while being technically and legally opaque.
Like social media, LLMs pose risks associated with the production and dissemination of information online, raising the same kinds of concerns over the quality and content of online conversations and public debate. Taken together, these risks threaten to distort political debate, affect democracy, and even endanger public safety. Moreover, OpenAI reported an estimated 100 million active users of ChatGPT in January 2023, making the potential for a vast and systemic impact of these risks considerable.
LLMs are also expected to have great potential. They are likely to transform a variety of industries, freeing up professionals' time to focus on substantive matters. They may also improve access to various services by facilitating the production of personalized content, for example for medical patients or students. Consequently, a critical policy question LLMs pose is how to regulate them so that some of these risks are mitigated while still encouraging innovation and allowing their benefits to be realized.
This Essay examines this question, with a focus on the liability regime for LLMs for speech and informational harms and risks in the European Union.