A.I.-ay-yi-yi-yi

Good grief, we’re burning up massive amounts of electricity to create three-eyed Snoopys

Someone posted this A.I.-generated image on Facebook recently:

Great job, ChatGPT! I’d recognize the famous “Penoots” character “Snoppy” anywhere.

Take a good look at this abomination. Snoopy’s smile is sideways, and he appears to have either two noses, or a sideways eyeball. It’s hard to tell. A six-year-old child could draw a better picture of Snoopy, and also could tell you why this image is messed up.

But “artificial intelligence” (sic) can’t, because “artificial intelligence” (sic) isn’t actually “intelligent” in any human way. It doesn’t have the reasoning power of a six-year-old child.

Charles Schulz had a very simple but sophisticated drawing style that reduced mouths to lines and eyes to ovals. Nevertheless, a six-year-old can intuit that a vertical oval represents Snoopy’s eye and a horizontal oval represents Snoopy’s nose, because human brains are great at spotting faces.

Computers have to have that explained to them.

I’ve written about this before, but I think it bears repeating, especially as Pittsburgh’s elected officials are all eagerly touting “A.I.” (sic) as the Next Big Thing That is Going to Save Western Pennsylvania from its ongoing population decline.

(Previous things that were going to save us from population decline, during my lifetime, have included tourism, new professional sports stadiums, foodie culture, a bigger airport, more hotels, the Mon-Fayette Expressway, a bigger convention center, legalized gambling, and I’m sure I’m missing some things. During that time, Pittsburgh has slipped from the top 10 metropolitan areas in the United States to barely hanging on in the top 30.)

“The Fish that Saved Pittsburgh” also didn’t save Pittsburgh.

Behind the original concept of “artificial intelligence” was the idea that science could create computers that solved problems like a human solves problems. Beginning in the 1950s, computer scientists, psychologists, neurologists and others — many of them in Pittsburgh at Carnegie Tech, now Carnegie Mellon University — worked to develop models of the human brain, understand human reasoning, and create computer programs that could emulate the thinking process.

Their goal was a “thinking machine.” At times, they got close, but understanding human thought has proved very elusive, and we still don’t know exactly how the human brain acquires information, stores memories and synthesizes it all into new ideas.

At the same time, computer memory has gotten incredibly cheap. You can buy a multi-terabyte hard drive for under $100. You can buy a 128 gigabyte flash memory card at Walmart or Target for $20.

Computer processors have gotten cheaper and cheaper, too. A computer chip that cost the equivalent of $800 in 1970 costs $1.58 today, but don’t bother buying it; it’s been obsolete for decades.

Meanwhile, an Intel processor with 12 cores, which is almost unimaginably more powerful than the primitive 1970 technology, costs under $200.

So beginning just after the turn of the 21st century, research into computer problem-solving machines moved from “trying to create a thinking machine” to “throwing lots of cheap memory and cheap computer chips at the problems” and solving them through brute force.

And it worked! With enough cheap computer power thrown at difficult problems, you could come up with solutions. It was an enormous achievement.

Most computer scientists, rightly, didn’t call this “artificial intelligence,” because they knew better. It wasn’t “intelligent.” They called the technique “statistical machine learning” and the programs “large language models.”

But “statistical machine learning” sounds dull. Investors and marketing people have labeled it “artificial intelligence” because that’s way sexier.

And when they started pushing it as “artificial intelligence” that could replace human workers and make bigger profits for companies that use A.I., money from Wall Street followed.

Some day, I expect, probably sooner than we think, scientists will invent machines that can reason and think like a human, and those machines will be able to create art and music from scratch.

But they’re not there yet; right now, every “A.I.” still relies on sucking up vast amounts of data — whether it’s words or pictures or sounds — turning it into numbers, slicing and dicing and processing them, and then crunching them back out.

It’s still brute-force, and assigning an “A.I.” program or a chat-bot a personality, or assuming it has motives or feelings, strikes me as foolish and incredibly dangerous.

“A.I.” can’t create anything. It can only look at everything that came before it, and using probability and statistics, create a simulation from those inputs that looks like the previous things. It’s a copy of a copy. It’s not creativity.
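If you want to see the “copy of a copy” idea in miniature, here is a toy sketch (my own illustration, not how any real chat-bot is actually built) of a program that “writes” only by recombining word pairs it has already seen. The one-sentence training text is made up for the example:

```python
import random
from collections import defaultdict

# A made-up one-sentence "training set" (real systems ingest billions of words).
training_text = "snoopy sleeps on top of his doghouse and snoopy dreams of flying"

# Count which word has been seen following which.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

# "Generate" text by repeatedly picking a word that has followed the current one before.
word = "snoopy"
output = [word]
for _ in range(8):
    if word not in follows:
        break  # nothing has ever followed this word, so the program is stuck
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))  # e.g. "snoopy dreams of his doghouse and snoopy sleeps on"
```

Everything it produces is stitched together from the sentence it was fed; nothing in the output is new. Real systems use vastly more data and far fancier statistics, but the spirit is the same.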

I find A.I. is incredibly useful … for some things. I use A.I. software to generate transcripts of long audio files. I’ve also used A.I. to make summaries of long documents. In both cases, the results aren’t perfect, but they’re pretty darned good, and they are time-savers. The brute-force approach works really well at taking a big giant amount of data and smooshing it down into a smaller amount of data, just as a paper filter works really well at extracting delicious coffee from ground-up beans.

But I would never just pour random shit into the coffee pot and drink whatever comes out, and I would never use A.I. to generate something out of thin air.

Nor would I trust it to answer questions for me. When you ask an A.I. a question, the A.I. can’t know whether the answers are actually correct, or even logical. It’s taking guesses, based on crunching massive amounts of data and simulating what it’s seen before; sometimes it guesses right, sometimes it guesses wrong. If garbage went in, garbage will come out.

As a for-instance, a colleague recently asked a chat-bot to examine his website for broken links. It spit out a page full of broken links and he asked if I’d help fix them. When I logged into the website, I found that the pages the chat-bot had decided were “broken” didn’t actually exist.

In fact, the URLs — the web addresses at the top of browser windows that show where web pages can be found — included directories and subdirectories that weren’t even on that website. No human being would have typed those addresses into Safari or Chrome, because a human would have instantly realized the URLs didn’t make sense. I couldn’t figure out why the chat-bot found broken pages that weren’t even there.

I just happened to be having lunch with a computer scientist friend that day, so I explained the problem. “Oh, that,” he said. “The A.I. made them up. You gave it a task — find broken links on the website — so it did its best to find some broken links on the website to answer your question.

“Basically, it hallucinated them,” he said.
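For what it’s worth, a real link checker (the boring, non-“A.I.” kind) can’t hallucinate, because it only tests addresses that actually appear in the site’s own pages. Here’s a minimal sketch, assuming Python with the widely used requests and beautifulsoup4 packages; the starting address is a placeholder, not my colleague’s actual site:

```python
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

START_PAGE = "https://example.com/"  # placeholder address for the sketch

def find_broken_links(page_url):
    """Fetch one page, test every link it actually contains, and report the dead ones."""
    html = requests.get(page_url, timeout=10).text
    broken = []
    for tag in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        link = urljoin(page_url, tag["href"])  # resolve relative links against the page
        if not link.startswith("http"):
            continue  # skip mailto:, javascript:, and similar non-web links
        try:
            status = requests.head(link, timeout=10, allow_redirects=True).status_code
        except requests.RequestException:
            status = None
        if status is None or status >= 400:
            broken.append(link)
    return broken

if __name__ == "__main__":
    for url in find_broken_links(START_PAGE):
        print("broken:", url)
```

Every URL that script reports came out of the page itself. That’s the discipline the chat-bot skipped.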

Let’s set aside for a minute, if we can, that “A.I.” (sic) programs require theft and plagiarism. For example, to generate our “Snoppy” picture at the top of the page, the “A.I.” (sic) had to scrape thousands of images of “Peanuts” cartoons, turn them into numbers, chop them into bites of data, and determine what kinds of shapes and colors are typical of “Snoopy.” That’s copyright infringement. A lot of people get stuck at this very point — “A.I.” (sic) companies are profiting by stealing work done by humans. Eventually, if you take humans out of the creative process, there will be nothing for “A.I.” (sic) to steal except for products squeezed out by other “A.I.” (sic).

And let’s set aside for a minute that “A.I.” (sic) requires enormous, massive amounts of electrical power and massive amounts of water to cool down the computers. Remember, when you ask “A.I.” (sic) to generate a picture or write a memo for you, it’s solving that problem by firing up huge networks of computer chips and storage systems.

Since 2021, according to some estimates, electricity prices for residential customers have gone up 10 to 20 percent and prices for commercial customers are up about 30 percent. The rapid construction of power-hungry data-processing centers is being cited as a major cause of the increase in costs. And in some regions, water supplies are being drawn down as water is diverted to cool those computer centers. It’s resource-intensive and contributes to climate change; this is another objection that many people have to “A.I.” (sic).

So “Artificial Intelligence” (sic) relies on stealing other people’s work, using massive amounts of electricity (creating more pollution) and using massive amounts of water … but is it at least producing something that’s useful? Is there at least a trade-off?

It’s not clear to me that there is. Right now, the amount of damage that “A.I.” (sic) is doing far outweighs any meager benefits.

To go back to the example of our picture at the top of the page:

First: Snoopy is probably the most popular cartoon character in the entire world after Mickey Mouse. Since 1950, Snoopy has been depicted in movies, TV shows, books and (of course) newspapers, and plastered on everything from blimps to T-shirts to lunchboxes. There is no need for a computer to be able to generate fake images of Snoopy. We don’t need it.

And second, despite all of the expense involved in running an A.I. data center, and all of the pollution it creates, A.I. can’t even generate a good image of Snoopy.

Does that sound like a good deal to you? It sounds lousy to me.

If you went to McDonald’s and every third hamburger they made was actually a giant, basketball-sized meatball instead, and it cost you $500, you’d be pretty pissed off.

“A.I.” (sic) is not a toy. It’s an insanely expensive giant meatball generator. It’s a powerful tool, but so is a power saw, and if you misuse either one of them, you can hurt yourself.

It’s also not “intelligent.” It’s dumb. That may change some day — I’d be willing to say it will change some day — but it hasn’t yet.

Anyone who tells you that “artificial intelligence is here, now, and we don’t need humans to create movies or images or write books any more, or even doctors or nurses to diagnose diseases, because computers can do that work!” is dumb enough to buy a picture of a three-eyed Snoopy and call it art.

And don’t let them pick up food at McDonald’s for you or use your power saw without strict supervision.
