
AI is everywhere. And, no, I do not mean in a "Skynet is coming" or "Big Brother is here" kind of way. I mean everywhere you look. Stories abound about the oddities that result from using generative AI. In one story, a car dealership's chatbot offered a 2024 Chevy Tahoe for $1.00. In another, attorneys using ChatGPT discovered that the tool was pulling facts out of thin air, fabricating citations to cases that did not exist.

The AI mentioned above is "generative AI." AI generally falls into two categories: "analytical AI," sometimes also referred to as "predictive AI," and generative AI. Analytical AI has been around for a while, with some tracing it back to Alan Turing's work breaking the Enigma code in World War II. Analytical AI typically involves an AI program, also called a model, that a programmer has trained on several large data sets to "learn" to associate a particular input with its related output.
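For readers curious what that "training" looks like in practice, here is a deliberately tiny sketch in Python, using made-up numbers and the open-source scikit-learn library, of a model learning to associate example inputs with their known outputs and then predicting the output for a new input. It illustrates the concept only; it does not describe any particular product.

# Hypothetical example: a model "learns" to associate inputs with outputs.
from sklearn.linear_model import LogisticRegression

# Made-up training data: each row is an input; each label is its known output.
inputs = [[0.2, 1.1], [0.4, 0.9], [3.1, 2.8], [2.9, 3.3]]
labels = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(inputs, labels)           # "training": learn the input-output association

print(model.predict([[0.3, 1.0]]))  # predict the output for a new, unseen input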

Generative AI involves an extra piece, called a transformer (GPT stands for Generative Pre-trained Transformer). Google researchers introduced the first transformer in 2017. Transformers began in natural language processing (NLP) and have become the backbone of the large language models (LLMs) behind most AI applications, such as ChatGPT, Google Gemini (formerly Bard), BERT, and DALL-E. In addition to generating text, images, and video, related tools such as IBM's watsonx Code Assistant and Microsoft's Copilot let users produce computer code. AI can now not just analyze data; it can transform it to create new things.

These tools provide amazing advantages for users. At a recent conference of small business owners, several discussed how a company of 5 to 10 people can use these tools instead of hiring more people. Some companies take the approach that no one should ever use generative AI tools. That certainly keeps everything safe but may cause those companies to fall behind. Other companies take the approach that everyone should use the tools for everything; those companies may open themselves up to risk.

The risks fall into one or more of three areas: intellectual property (IP), data privacy, and bias and discrimination. Regarding IP, using generative AI to produce content, whether your company does it or others do, can leave the results unprotected or infringe someone else's IP rights. By governing the use of AI, companies can take advantage of it while mitigating the risks.

Initially, both the US Copyright Office and the US Patent and Trademark Office (USPTO) said that nothing generated using generative AI could be protected. Both agencies have since backed off that stance, the USPTO just last month. They now agree that determining whether this work can be protected requires some analysis rather than a hardline rule. That may mitigate issues with the lack of rights. The other issue lies in infringement: your company may unwittingly infringe others' rights, and other companies may unwittingly infringe yours.

When you ask a generative AI tool to produce content, say an image, it will look for images close to what you want and may use parts of them to produce the image you asked for. Depending on how much of other people's content shows up in your image, the result may be a derivative work of that content, if not a direct knockoff. Similarly, if your company has publicly available content that these tools can access, your images, your writing, and your work may become part of someone else's result, and you would never know. Initially, some of these tools also used the prompts users entered, and the results, as training data to further train the model. The more training data these models have, the more accurately they work.

The need for training data gives rise to another area of concern: data privacy. These models need astonishing amounts of data, and the programmers training them sometimes use data in ways that violate privacy laws, not necessarily intentionally. Programmers may not review data sources closely enough, some sources fall through the cracks, and some programmers may use the data regardless of privacy concerns. Anonymizers and similar tools can reduce the likelihood of confidential data being used for training, or even for operations, but someone needs to keep an eye on it.
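As a concrete, simplified illustration of what that kind of safeguard can look like, the short Python sketch below scrubs obvious identifiers (e-mail addresses and US-style phone numbers) from text before it is used as training data or sent to an outside tool. Real anonymization products go far beyond this kind of simple pattern matching; the sketch is only meant to make the idea tangible.

# Simplified illustration of redacting obvious identifiers before text
# is used for training or sent to an outside AI tool.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text):
    """Replace e-mail addresses and US-style phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL REDACTED]", text)
    text = PHONE.sub("[PHONE REDACTED]", text)
    return text

print(redact("Contact Jane Doe at jane.doe@example.com or 503-555-0123."))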

Data also contributes to another area of concern. Most discussions label it "bias and discrimination," which names the result, not the cause. The causes of a model making biased decisions typically include "bad" data, "bad" programming, or user error. Bad data usually means incomplete data sets, or data sets that no one has analyzed to make sure they represent the full population on which the model will operate. Underrepresentation of a particular group will cause inaccurate results for that group. The population is not necessarily human, although that is the most common case in which this arises. Examples occur often in health care and in recruiting and hiring. Amazon stopped using AI to sort resumes because the model looked for keywords that appeared more frequently on men's resumes than on women's.
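One simple way to picture the "bad data" problem: before training, count how many examples fall into each group and flag any group that is badly underrepresented. The Python sketch below does exactly that, using made-up group labels and an arbitrary threshold; real representativeness reviews are far more involved.

# Hypothetical check: flag groups that make up too little of the training data.
from collections import Counter

training_groups = ["A", "A", "A", "A", "A", "A", "A", "A", "B", "B"]  # made-up labels

counts = Counter(training_groups)
total = sum(counts.values())

for group, count in counts.items():
    share = count / total
    if share < 0.25:  # illustrative threshold only
        print(f"Group {group} is only {share:.0%} of the data; "
              f"results for this group may be less accurate.")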

Bad programming results when programmers, consciously or unconsciously, build their own biases into the factors and weightings the models use. The remaining source of bias is user error, whether in entering the query or in interpreting the results.

For many companies, outside services and vendors oversee the programming and training of the models. That makes it difficult for customer companies to control data bias or programming bias. It may also make it difficult for users within a company to track IP and privacy issues. In some cases, the outside vendor may itself be using third-party AI tools, such as a recruiter using AI to sort resumes or a loan processor using AI to identify the best loan candidates, and the vendor may not have any detailed knowledge of the AI tools it is using.

However, companies can adopt policies to ensure that their employees use these tools responsibly, that the tools come from credible vendors or providers, and that someone with knowledge of these issues reviews the outputs before they are used. An AI governance policy lets a company show that its use of AI complies with relevant laws and regulations, does not put customers' data at risk, and relies on AI tools that are reputable and fair.

Miller Nash’s technology professionals can assist in these situations and can answer your questions about artificial intelligence, copyright ownership, protection, and infringement. Give us a call.

Interested in more information? Below are other articles we’ve posted on issues in patent law and copyright law about trying to protect IP resulting from generative AI tools.

    This article is provided for informational purposes only—it does not constitute legal advice and does not create an attorney-client relationship between the firm and the reader. Readers should consult legal counsel before taking action relating to the subject matter of this article.
