How to rein in the AI threat? Let the lawyers loose

Are you worried about the threat of AI? We look at four benefits of slowing down progress, and some concrete steps to implement legal liability in AI development.

55% of Americans are worried about the threat AI poses to the future of humanity, according to a recent Monmouth University poll.

In an era where technological advancements are accelerating at breakneck speed, it is crucial to ensure that artificial intelligence (AI) development remains in check. As AI-powered chatbots like ChatGPT become increasingly integrated into our daily lives, it is high time we address potential legal and ethical implications. 

And some have done so. A recent letter signed by Elon Musk, who co-founded OpenAI, Steve Wozniak, the co-founder of Apple, and over 1,000 other AI experts and funders calls for a six-month pause in training new models.

Meanwhile, Time published an article by Eliezer Yudkowsky, a founder of the field of AI alignment, calling for a far more hard-line solution: a permanent global ban, backed by international sanctions against any country pursuing AI research.

The problem with these proposals, however, is that they require coordination among numerous stakeholders across a wide variety of companies and governments. Let me share a more modest proposal that is much more in line with our existing methods of reining in potentially threatening developments: legal liability.

By leveraging legal liability, we can effectively slow AI development and make certain that these innovations align with our values and ethics. We can ensure that AI companies themselves promote safety and innovate in ways that minimize the threat they pose to society.

We can ensure that AI tools are developed and used ethically and effectively, as I discuss in depth in my new book, ChatGPT for Thought Leaders and Content Creators: Unlocking the Potential of Generative AI for Innovative and Effective Content Creation.

Why legal liability is a vital tool for regulating AI development

Section 230 of the Communications Decency Act has long shielded internet platforms from liability for content created by users. However, as AI technology becomes more sophisticated, the line between content creators and content hosts blurs, raising questions about whether AI-powered platforms like ChatGPT should be held liable for the content they produce.

The introduction of legal liability for AI developers will compel companies to prioritize ethical considerations, ensuring that their AI products operate within the bounds of social norms and legal regulations. They will be forced to internalize what economists call negative externalities, meaning negative side effects of products or business activities that affect other parties.

A negative externality might be loud music from a nightclub bothering neighbors. The threat of legal liability for negative externalities will effectively slow down AI development, providing ample time for reflection and the establishment of robust governance frameworks.

To curb the rapid, unchecked development of AI, it is essential to hold developers and companies accountable for the consequences of their creations. Legal liability encourages transparency and responsibility, pushing developers to prioritize the refinement of AI algorithms, reducing the risks of harmful outputs, and ensuring compliance with regulatory standards.

For example, an AI chatbot that perpetuates hate speech or misinformation could cause significant social harm. A more advanced AI tasked with boosting a company's stock price might, if not bound by ethical constraints, sabotage its competitors. By imposing legal liability on developers and companies, we create a potent incentive for them to invest in refining the technology to avoid such outcomes.

Legal liability, moreover, is much more doable than a six-month pause, to say nothing of a permanent ban. It is aligned with how we do things in America: rather than having the government regulate business in advance, we permit innovation but punish the negative consequences of harmful business activity.

Four benefits of slowing down AI development

Here are four benefits of slowing down AI development.

1) Ensuring ethical AI

By slowing down AI development, we can take a deliberate approach to the integration of ethical principles in the design and deployment of AI systems. This will reduce the risk of bias, discrimination, and other ethical pitfalls that could have severe societal implications.

2) Avoiding technological unemployment

The rapid development of AI has the potential to disrupt labor markets, leading to widespread unemployment. By slowing down the pace of AI advancement, we provide time for labor markets to adapt and mitigate the risk of technological unemployment.

3) Strengthening regulations

Regulating AI is a complex task that requires a comprehensive understanding of the technology and its implications. Slowing down AI development allows for the establishment of robust regulatory frameworks that address the challenges posed by AI effectively.

4) Fostering public trust

Introducing legal liability in AI development can help build public trust in these technologies. By demonstrating a commitment to transparency, accountability, and ethical considerations, companies can foster a positive relationship with the public, paving the way for a responsible and sustainable AI-driven future.

Concrete steps to implement legal liability in AI development

Here are some concrete steps to implement legal liability in AI development.

Clarify Section 230

Section 230 does not appear to cover AI-generated content. The law defines an "information content provider" as "any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service."

The definition of “development” of content “in part” remains somewhat ambiguous, but judicial rulings have determined that a platform cannot rely on Section 230 for protection if it supplies “pre-populated answers” so that it is “much more than a passive transmitter of information provided by others.”

Thus, courts would quite likely find that AI-generated content is not covered by Section 230. It would be helpful for those who want to slow AI development to bring legal cases that allow courts to clarify this matter.

By clarifying that AI-generated content is not exempt from liability, we create a strong incentive for developers to exercise caution and ensure their creations meet ethical and legal standards.

Establish AI governance bodies

In the meantime, governments and private entities should collaborate to establish AI governance bodies that develop guidelines, regulations, and best practices for AI developers. These bodies can help monitor AI development and ensure compliance with established standards. Doing so would help manage legal liability and facilitate innovation within ethical bounds.

Encourage collaboration

Fostering collaboration between AI developers, regulators, and ethicists is vital for the creation of comprehensive regulatory frameworks. By working together, stakeholders can develop guidelines that strike a balance between innovation and responsible AI development.

Educate the public

Public awareness and understanding of AI technology are essential for effective regulation. By educating the public on the benefits and risks of AI, we can foster informed debates and discussions that drive the development of balanced and effective regulatory frameworks.

Develop liability insurance for AI developers

Insurance companies should offer liability insurance for AI developers, incentivizing them to adopt best practices and adhere to established guidelines. This approach will help reduce the financial risks associated with potential legal liabilities and promote responsible AI development.

We need to address the ethical and legal implications of AI development

The increasing prominence of AI technologies like ChatGPT highlights the urgent need to address the ethical and legal implications of AI development. By harnessing legal liability as a tool to slow down AI development, we can create an environment that fosters responsible innovation, prioritizes ethical considerations, and minimizes the risks associated with these emerging technologies.

It is essential that developers, companies, regulators, and the public come together to chart a responsible course for AI development that safeguards humanity’s best interests and promotes a sustainable, equitable future.

Dr. Gleb Tsipursky helps tech and finance industry executives drive collaboration, innovation, and retention in hybrid work. He serves as the CEO of the boutique future-of-work consultancy Disaster Avoidance Experts.

He is the best-selling author of seven books, including Never Go With Your Gut and Leading Hybrid and Remote Teams. His cutting-edge thought leadership has been featured in over 650 articles in prominent venues such as Harvard Business Review, Fortune, and Forbes.

His expertise comes from over 20 years of consulting for Fortune 500 companies from Aflac to Xerox and over 15 years in academia as a behavioral scientist at UNC-Chapel Hill and Ohio State. A proud Ukrainian American, Dr. Gleb lives in Columbus, Ohio.