Why I Won't Use AI

  • Writer: Anton Skaugset
  • Jan 29
  • 4 min read

I am one of those folks who are fairly rabidly opposed to what is currently referred to as "AI" (even though it is most assuredly not truly artificial intelligence). There are strong environmental and economic reasons to avoid AI use, and, as an intellectual property professional, I cannot overlook the astonishing scope of IP theft that has to be committed in order to even create such models. But I will not preach upon those issues here.


Instead, I want to give clients, or potential clients, a more practical explanation of why I don't use AI. Some clients may strongly believe that using AI could generate better results, faster, than not doing so, and thereby save them money. I respectfully disagree. Let me try to explain.


(I am not a computer scientist. I'm sure that I don't understand the technical details of precisely how AI works. But, chances are, neither do you. I may get some of the bits wrong, but please just try to follow the general reasoning.)


I don't like the term "AI"; I prefer to call these systems LLMs, or Large Language Models, which is a much better descriptor for what they actually are. A Large Language Model is just software that has absorbed a truly vast amount of text and, when asked for a response, generates one based on a statistical analysis of the billions of examples of vocabulary, syntax, grammar, etc. in the data set it was trained on. That is, it can look at the arrangement and order of words in your prompt, and then spit out another sequence of words that, statistically, would appear to be an appropriate response.


There is no thinking going on. This is just probability-based word selection.
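(For the technically curious, here is a toy sketch of what I mean by probability-based word selection, written in Python. It is deliberately crude, and nothing like how a real LLM is actually built, but the core move is the same: count which words tend to follow which, then pick each next word at random, weighted by those counts. The tiny "training set" here is made up for illustration.)

    # A toy illustration of probability-based word selection.
    # This is NOT how a real LLM works under the hood; it just counts
    # which word follows which in a tiny made-up "training set," then
    # picks each next word at random, weighted by those counts.
    import random
    from collections import defaultdict

    training_text = (
        "the claim recites a widget the claim recites a gadget "
        "the claim is rejected the claim is allowed"
    )

    # Count how often each word follows each other word.
    follows = defaultdict(list)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

    # Generate text: start with a word, then repeatedly pick a
    # statistically likely next word. Duplicate entries in each list
    # make random.choice a frequency-weighted draw.
    word = "the"
    output = [word]
    for _ in range(8):
        if word not in follows:
            break
        word = random.choice(follows[word])
        output.append(word)

    print(" ".join(output))

Scale that trivial counting trick up to billions of documents and billions of parameters, and you have the flavor of the thing: fluent-sounding output with no understanding behind it.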


The first reason I won't use LLMs is that the one or two times I have tried, I've immediately gotten incorrect information back. If you ask a question in a field that you already understand, you will see immediately that there are errors in the response. However, if you ask a question in a field that you know nothing about, you will never spot those errors, and the answer will sound correct. This is possibly the very worst aspect of using an LLM to try to learn anything.


The second reason is a little more complicated, and is based on the statistics of large data sets. This is a normal distribution:

[Figure: Normal Distribution (or Bell Curve)]

If you plot the values of virtually any naturally-occurring variable for a population, such as height, weight, GPA, and so on, the resulting plot will look something like a normal distribution. If the plot is a plot of height, the tail on the left would be those individuals who are extremely short, and the tail on the far right would be those individuals who are extremely tall. The middle of the curve, where it is highest, represents the average height.
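(If you want to watch one of these curves emerge for yourself, here is a quick sketch. Every number in it, the 170 cm average, the 8 cm spread, is invented for illustration; the point is only the shape that appears.)

    # Sample the "heights" of 10,000 hypothetical people from a
    # normal distribution (average 170 cm, standard deviation 8 cm)
    # and print a crude text histogram. The bars form a bell curve.
    import random

    random.seed(0)
    heights = [random.gauss(170, 8) for _ in range(10_000)]

    # Bucket the samples into 4 cm bins, one '#' per 40 people.
    for lo in range(146, 194, 4):
        count = sum(1 for h in heights if lo <= h < lo + 4)
        print(f"{lo:3d}-{lo + 4:<3d} cm | {'#' * (count // 40)}")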


Now let's say the distribution represents the quality of legal writing, specifically patent drafting, that you could find in a very large set of documents. Say you were, for example, scraping the entire Internet for data. You would get a distribution that might look much like the bell curve above.

On the far right you have the work product of brilliant patent practitioners. On the far left, the output of utterly unqualified hacks. In the middle, average-quality work.


I have been doing patent prosecution for a very long time. I believe I've gotten pretty good at it. If you plotted the quality of my work on such a curve I would like to believe it would fall to the right of average, possibly even well to the right of average.


But an LLM considers the entirety of the data set it was trained on. That's the entire area under the normal distribution curve. An LLM has no capacity for differentiating "good" examples from "bad" examples. It will derive its patterns, vocabulary, and syntax from the summed output of everyone from awesome IP attorneys to the folks spewing absolute nonsense. And believe me, folks spewing nonsense dramatically outnumber the expert patent practitioners.
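(You can make the same point with a back-of-the-envelope simulation. The numbers below, the 0-100 quality scale, the mediocre mean of 50, the "expert level" of 85, are entirely invented, but the conclusion doesn't depend on them: a model shaped by the whole pool gravitates toward the middle of the curve, and only a sliver of the pool sits at the expert end.)

    # A rough sketch of the statistical argument: if a model's output
    # is shaped by the whole pool of training documents, its expected
    # quality sits near the pool's average, not at the expert end.
    # All numbers are invented purely for illustration.
    import random
    import statistics

    random.seed(0)

    # Pretend document quality is scored 0-100, centered on a
    # mediocre mean of 50 with a standard deviation of 15.
    pool = [random.gauss(50, 15) for _ in range(100_000)]

    expert_level = 85  # where skilled practitioners' work might sit

    print(f"average quality of the whole pool: {statistics.mean(pool):.1f}")
    print("fraction of the pool at or above the expert level: "
          f"{sum(q >= expert_level for q in pool) / len(pool):.1%}")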


So when I say I won't use AI, what I'm really saying is "Why would I want to create a work product that is most likely going to be inferior to what I could just write myself?" And if your answer is that I could always edit and revise what I get, I will point out that the consequence of an error in my line of work ranges from mildly embarrassing to utterly catastrophic for my client. To properly do my job, I would have to scrutinize and edit LLM output extremely carefully so that I would know I was not submitting something improper over my signature. That's a lot of time and a lot of work, and it really is far easier, and much better for my peace of mind, to just draft it myself.


And in any event, I will have the comfort of knowing that any errors I create are absolutely my own.
