Big Tech’s Race to Develop Superior Artificial Intelligence Technology
Will A.I. Compromise Free Enterprise, Disclosure and Security?
America’s Big Five tech companies – Amazon, Apple, Facebook, Google and Microsoft – are racing to develop technology they claim will change the world — again. The tech Goliaths have more than 33,000 researchers at their disposal to create artificial intelligence (A.I.) technology with an obvious and perpetual prize: revenue.
It’s the talk of the world. NBC Nightly News recently predicted the impacts A.I. will have on society in the coming years. A.I. tech was also the center of attention at the 2023 Davos Economic Summit, where prominent tech leaders such as Elon Musk and OpenAI CEO Sam Altman predicted that A.I. will improve virtually everyone’s lives, albeit with some risks involved.
Andrew Perlman, dean of Suffolk University Law School, says there is nothing “future” about it. In The Implications of ChatGPT for Legal Services and Society, he wrote, “The disruptions from AI’s rapid development are no longer in the distant future. They have arrived …” And for the legal industry, he said, “ChatGPT may portend an even more momentous shift than the advent of the internet.”
Just one legal application out there today is the use of A.I. technology (GPT-3) by Docket Alarm, a popular court docket search service. Docket Alarm allows users to see A.I.-generated summaries of filings without even opening them. Michael Sander, VP of analytics with Docket Alarm owner Fastcase, told legal technology enthusiast Bob Ambrogi that the feature is experimental and should be relied upon with some healthy caution. [Disclosure: HB collaborates with Fastcase in creating litigation content, e.g., the Journal of Emerging Issues in Litigation and the Emerging Litigation Podcast.]
As non-attorney and comic book hero Spider-Man famously said, “With great power comes great responsibility.” But will the tech companies (or their algorithms) take responsibility for the rush of legal issues certain to flow from an unregulated A.I. Wild West? Critics say this automated technology has already damaged democratic discourse. A.I.-generated content is easy to spot on Twitter and other platforms — flooding the digital town square of public opinion.
An unregulated A.I. race creates myriad legal issues that our lawmakers and our Constitution seem ill-equipped to address — at least quickly. Legal issues in which, critics fear, A.I. will play a role include the degradation of free speech and public discourse, increased monopolization, greater economic inequality, and the mass proliferation of copyright infringement.
Damage to Discourse and Democracy
Technology and democracy have historically gone hand in hand, from typesetters allowing printers to produce newspapers and magazines to President Roosevelt’s famous fireside chats over the radio. A healthy democracy relies on input from its citizens and on unhindered First Amendment rights for those who use technologies to disseminate messages, so long as those messages do not promote violence or undermine security.
Since 2015, A.I. has increasingly influenced the democratic process both in the United States and abroad. Chatbots, A.I. programs that use algorithms to generate automated text in response to people’s messages, have been used to repost, amplify, and generate social media posts on Twitter and other social media sites. Misinformation abounds.
A popular bot is ChatGPT developed by OpenAI, which we used to augment this article. [Editor’s Note: See the photo at the top and writing examples in the sidebar. The rest was drafted by a human being. Or so he claims.]
In a November 2022 op-ed in Scientific American, A.I. expert Gary Marcus observed that OpenAI’s ChatGPT can “sound authoritative, even when it’s wrong, which makes it a perfect tool for mass-producing misinformation.” A Stanford University research paper co-sponsored by the school’s sociology and psychology departments found that messages generated by ChatGPT are just as capable of persuading readers as messages written by humans.
A.I. has also been developed to write text for news stories. In July 2021, blogger Jacob Bergdahl experimented with OpenAI’s GPT-3 bot to generate comical fake news stories about how President Biden’s favorite food was pizza with ice cream on top, how Sweden’s prime minister rode a pig, and how the European Union was investing in onion farms. (Again, those are made up!) Bergdahl said, “I don’t know about you, but I’m equal parts impressed and terrified at how convincingly the algorithm explained these ridiculous topics. To reiterate: I only entered the first sentence of each story, and I didn’t edit the AI’s output in the slightest.”
A.I. has recently been employed to manipulate images, too. In March, U.K.-based journalist Eliot Higgins used the image-generating platform Midjourney to create a startlingly realistic image of Donald Trump being dramatically arrested in front of a Manhattan courthouse; Higgins believes he has since been banned from the platform. The image appeared on Bellingcat, Higgins’ investigative journalism site, and he shared it on Twitter, where it went viral and was shared by millions of social media users.
Critics say the challenges to democracy are exacerbated by the Big Five’s hold on the technology.
Monopolization and Free Enterprise Limitations
Text-based A.I. tools are already widely used by mid-sized companies and large corporations, particularly in the form of chatbots. Tech companies like Outreach.io promote chatbot services to streamline customer service, reduce costs, and trim labor needs. However, many executives are “proceeding with caution given the limitations of ChatGPT,” according to a Wall Street Journal article published this January. Chatbots built on ChatGPT, and eventually on more advanced A.I. language systems, may even convince many customers that they are interacting with human beings. The WSJ further reports that “[w]hile many chatbots are trained to deliver a version of ‘I don’t know’ to requests they cannot compute, ChatGPT, for example, is more likely to spout off a response with complete confidence—even if the information is wrong.”
“[G]enerative A.I. risks turbocharging fraud. It may not be ready to replace professional writers, but it can already do a vastly better job of crafting a seemingly authentic message than your average con artist — equipping scammers to generate content quickly and cheaply.” — FTC Chair Lina Khan, May 3, 2023, New York Times
Data security company Cyberhaven, which offers data security software to a variety of companies and observed that a growing number of its clients had employees using ChatGPT, recently audited that usage to determine whether sensitive company data was being passed to the chatbot service. The audit revealed that as much as 11% of the content pasted into ChatGPT contained sensitive company data. “Despite some companies blocking ChatGPT, its use in the workplace is growing rapidly,” wrote Cyberhaven’s Cameron Coles.
Search providers have also implemented their own chatbots or partnered with A.I. companies to build the technology into their search engines. Google uses an A.I. tool called Bard; Microsoft recently integrated ChatGPT’s technology into its search engine, Bing. A report by Public Citizen explains that an “A.I.-generated answer means the search engine becomes less a tool for finding unique and original sources of information and more a tool for synthesizing those original sources into a secondary source.” Microsoft has started incorporating ads into its Bing chatbot, which will likely drive more online traffic away from original information sources and toward the answers provided by the A.I. service.
Publishers have also sounded the alarm about chatbots and A.I.-powered search engines. Publishers rely on users finding their content through search engines and worry that A.I. tools will drive traffic away from their sites. A.I.-generated search results further threaten small and mid-sized businesses and their economic prospects. OpenAI states on its website that it is developing plug-ins that will allow its latest model of ChatGPT to perform automated actions online for customers, such as booking flights, ordering groceries, and shopping.
As the report by Public Citizen notes, “A.I. tools as intermediaries is another way tech corporations can insert themselves into supply chains and charge commissions that raise prices for consumers, while siphoning money away from small and local businesses.”
A.I. may set large businesses up for claims of monopolization and unfair business practices, some forecast.
“While the technology is moving swiftly, we already can see several risks. The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms. A handful of powerful businesses control the necessary raw materials that start-ups and other companies rely on to develop and deploy A.I. tools. This includes cloud services and computing power, as well as vast stores of data.” — FTC Chair Lina Khan, May 3, 2023, New York Times
What is more, this technology has been widely predicted to cause greater economic inequality than exists today.
Economic Inequality
With some 33,000 doctoral-level A.I. researchers at their disposal, the Big Five make their intentions clear: to generate more corporate wealth. According to a March 27, 2023 article in the Washington Post, nearly 70% of A.I. Ph.D.s now opt to work in the corporate sector, whereas 20 years ago that number was roughly 20%. This shift suggests that the vast majority of A.I. tools and technology being developed serve not academia or truly life-improving purposes, but corporations seeking to automate their operations and more easily generate revenue.
OpenAI CEO Sam Altman wrote a manifesto in 2021 warning that, absent policy changes, the widespread deployment of A.I. could leave most people worse off than they are today. Altman painted a striking picture of the world to come, predicting that “in the next five years, computer programs that can think will read legal documents and give medical advice. In the next decade, they will do assembly-line work and maybe even become companions. And in the decades after that, they will do practically everything, including making new scientific discoveries that will expand our concept of everything.”
Altman also argued that A.I. will “create phenomenal wealth,” and “if we get this right…can improve the standard of living for people more than we ever have before.” OpenAI conducted its own research into this topic in 2023 and published a paper indicating that approximately 80% of the U.S. workforce could have at least 10% of their tasks affected by the introduction of GPTs (generative pre-trained transformers), while roughly 19% of the workforce could have as much as 50% of their tasks automated. Unregulated image, text, and even voice manipulation by GPTs also has the potential to create many copyright issues, especially for professional artists, musicians, and actors.
Copyright Infringement
Public Citizen reported that artists and writers have had the content they produced and published online used without their consent to train A.I. tools to produce derivative art. Cartoonist Sarah Andersen’s artwork was turned into neo-Nazi memes by far-right political activists. Voice actors have similarly been affected by the nonconsensual use of their voices in A.I. tech.
Vice News reported on Feb. 7, 2023, that voice actors were increasingly being presented with contracts containing language that gives away the rights to their A.I.-generated voices. (Demonstrating what a high-wire act online media can be, Vice itself is reportedly headed toward bankruptcy.)
Tim Friedlander, president and founder of the National Association of Voice Actors, said clauses that sign the rights to an actor’s voice over to publishers “are very prevalent right now.” “[M]any voice actors may have signed a contract without realizing language like this had been added. We are also finding clauses in contracts for non-synthetic voice jobs that give away the rights to use an actor’s voice for synthetic voice training or creation without any additional compensation or approval. Some actors are being told they cannot be hired without agreeing to these clauses.” Actor Emma Watson’s voice was recently used without her consent for a reading of Mein Kampf, according to Vice News.
Getty Images has filed a lawsuit in federal court in Delaware against U.K.-based Stability A.I., alleging that the company copied 12 million images without permission to train its A.I. tools. Stability A.I. has responded to the complaint, arguing that its use of the images qualifies as fair use under 17 U.S.C. § 107 and does not constitute copyright infringement. Legal analysts believe Getty Images has a stronger case than an individual artist would have, given the wholesale use of millions of its images.
Proposed Public Solutions
Media attention surrounding A.I. tools and technology is accelerating. The Biden Administration acknowledged that policymaking was woefully lagging in mitigating potential harms stemming from the widespread deployment of A.I. In response, the Biden Administration published a “Blueprint for an A.I. Bill of Rights” in October 2022.
This blueprint is intended to serve as a broad guide for the federal government’s deployment of A.I. and a model of best practices for society at large.
There are five principles outlined in the guidance document:
1) Americans should be protected from unsafe or ineffective systems.
2) Americans should not face discrimination by algorithms.
3) Americans should be protected from abusive data practices and have agency over how data about them is used.
4) Americans should know when, how and why automated systems are being used to make decisions that affect them.
5) Americans should have the choice to opt out of automated customer service and have access to a person who can help troubleshoot problems.
Critics of the blueprint argue that while the White House did seek input from the Big Five’s lobbyists, the guide “is essentially a white paper with no enforcement authority against Big Tech.” Some in the corporate world have already pushed back, saying “even [the] unenforceable guidelines could stifle innovation.”
U.S. Representative Ted Lieu (D-Calif.), who has a background in computer science, is advocating for the creation of a federal agency dedicated to regulating A.I. technologies. Lieu argues that Congress needs to implement the creation of this new agency to “ensure that the development and deployment of A.I. is done in a way that is safe, ethical, and respects the rights and privacy of all Americans, and that the benefits of A.I. are widely distributed and the risks are minimized.”
As the Big Five and other A.I. companies continue their race to develop the newest and best A.I. technologies, people worldwide remain at risk of nonconsensual data collection, unwarranted marketing solicitation, and nonconsensual use of personal information, pictures, published text, and videos. This report highlights but a portion of the foreseeable legal issues arising in the unregulated Wild West of A.I. technology.
[Editor’s Note: Would it be petty to point out that in the photo of Robots Typing some appear not to have heads? But, then, maybe their brains and eyeballs are located elsewhere. 😊 ]
Tom Hagy edited and contributed to this article.
How A.I. Can Benefit Society
By ChatGPT
Artificial intelligence has the potential to benefit society in a multitude of ways. One of the most significant areas of impact is in automation, where AI can perform repetitive and tedious tasks more efficiently and accurately than humans. This can lead to cost savings and increased productivity, freeing up people’s time to focus on more creative and strategic endeavors. Additionally, AI can help in areas such as healthcare by improving diagnostics and developing personalized treatment plans. It can also be used in education to provide personalized learning experiences and in environmental conservation to monitor and protect wildlife and natural resources. AI can also assist in disaster response and relief efforts by analyzing data and predicting outcomes. Overall, artificial intelligence has the potential to improve many aspects of society, from healthcare to education to the environment, and beyond.
Can ChatGPT Really Pass the Bar?
By ChatGPT
As an AI language model, I do not have the ability to pass a bar exam or attend law school in the traditional sense. While I have been trained on a vast amount of legal information, including case law, statutes, and legal principles, I am not a licensed attorney and cannot provide legal advice or representation. However, I can assist in answering legal questions and providing information about legal topics. It is important to note that relying solely on information provided by an AI language model is not a substitute for seeking the advice of a licensed attorney.
What is This Article About?
By ChatGPT
The article discusses the race among America’s Big Five tech companies – Amazon, Apple, Facebook, Google, and Microsoft – to develop artificial intelligence (AI) technology that they claim will change the world. The author raises concerns about the legal issues that may arise from an unregulated AI race, including degradation of free speech and public discourse, increased monopolization, greater economic inequality, and copyright infringement. The article also discusses how AI has already influenced the democratic process by being used to spread misinformation and generate fake news stories. The article further argues that the almost exclusive development of AI technology by the Big Five could lead to monopolization and limitations on free enterprise.