Lucas Ropek | Gizmodo
Profits Before People
Well folks, the AI “revolution” is upon us, and it already seems to be getting slightly out of hand. Big corporations, spurred by the specter of easy profits, have begun to roll out new tools and products that use artificial intelligence to enhance user experience. Search engines, Hollywood, and media industries all seem to want to jump on board. Still, this totally unregulated field also has the potential to be hugely disruptive to existing industries, our way of life, art, judicial systems, even your brain. Here are some of the warning signs that the robot revolution could get very, very messy.
AI vs Human Coding
ChatGPT is apparently smart enough to get hired by Google. CNBC recently reported that the tech giant had tested ChatGPT for its coding prowess in the same way new coder recruits typically are tested, and that it passed with flying colors. “Amazingly ChatGPT gets hired at L3 [level 3] when interviewed for a coding position,” an internal note, divulged by the news outlet, reads. PC Magazine notes that such a position comes with an average annual salary of $183,000. Others have similarly tested the chatbot and come away concerned for human coders’ job security.
Automating our justice system sounds like one of the worst ideas I’ve ever heard. Still, folks are out here doing it. In a recent disturbing development, a judge in Colombia decided to use ChatGPT to render a judicial decision concerning an autistic child’s medical insurance. Judge Juan Manuel Padilla Garcia apparently thought it would be a good idea to use the chatbot to save time during the proceedings. “The arguments for this decision will be determined in line with the use of artificial intelligence (AI),” Garcia wrote in his decision. “What we are really looking for is to optimize the time spent drafting judgments after corroborating the information provided by AI.”
Multiple new programs and services are now advertising that AI can be used to write screenplays and automate the filmmaking process. For instance, Google’s AI lab DeepMind recently announced the launch of Dramatron, which it bills as a “co-writing” tool. According to its website, Dramatron is supposed to help screenwriters by using “hierarchical story generation for consistency across the generated text. Starting from a log line, Dramatron interactively generates character descriptions, plot points, location descriptions and dialogue…”
Ugh. This offends me on so many different levels that I can’t really even begin to unpack them. I mean, why even write the script at all? Just have a robot write the script and maybe the robot can also watch the finished movie, too, since there’s no point in human endeavor anymore. Suffice it to say, I hope these programs get terminally infected with some sort of malware.
A recent story from Motherboard shows that the entertainment industry is also trying to automate voice acting. The outlet reports that voice-over actors are being asked to sign away the rights to their own voices so that AI programs can be used to create “synthetic” versions of them. The outlet reports these new “contractual obligations are just one of the many concerns actors have about the rise of voice-generating artificial intelligence, which they say threaten to push entire segments of the industry out of work.”
Again, why? What is the point of doing this? The benefits of the technology seem minimal to me. Is it that a company can save money by generating huge swaths of bargain-bin voice acting? Let’s just leave that for robots to listen to.
What’s more dystopian than a computer program delivering your news for you, news you might have written yourself in a past job? Controversy exploded last month when it came to light that big-time tech publication CNET had been quietly publishing droves of financial explainers generated by an in-house AI program. Not only were the articles filled with factual inaccuracies, but some had the whiff of plagiarism as well. Maybe it’s not a super smart idea to automate an industry that relies on factual accuracy? Just a thought.
Of course, like any flashy new bauble that smells of money, AI is now exciting Wall Street. The Morning Brief writes that the “stock market hype machine” has now descended upon anything with “AI” affixed to its name, meaning that any company whose product can claim marginal relevance to automation is now primed for a big Nasdaq glow-up. As the Brief notes, it’s basically web3 all over again.
You know what’s a really horrifying thought? Pairing surveillance systems with artificial intelligence to create super smart spying apparatuses. Yeah, that doesn’t sound very good. Unfortunately, Wired recently reported that, in Russia, AI is being integrated into massive networks of security cameras to create cities where “there is no place to hide.” Charming! Can’t wait for the U.S. edition.
The AI “arms race” commences. Silicon Valley is looking to capitalize on AI’s big moment, and every tech Goliath worth its salt is feverishly looking to churn out a new product to keep pace with ChatGPT’s 100 million users. Microsoft kicked things off nicely earlier this month with its integration of ChatGPT into Bing, with Microsoft CEO Satya Nadella proclaiming, “The race starts today.” The OG tech giant says it wants to use the chatbot to “empower people to unlock the joy of discovery,” whatever that means. Not to be outdone, Google announced that it would be launching its own AI search integration, dubbed “Bard” (Google’s tool already made a mistake upon launch, sending the company’s stock into a slump). In China, meanwhile, the tech giants Alibaba (basically the Chinese version of Amazon) and Baidu (Chinese Google) recently announced that they would also be pursuing their own respective AI tools.
Do the people actually want an AI “revolution”? It’s not obvious that they do, but whether they want it or not, the tech industry is clearly going to give it to them. The robots are coming. Prep accordingly!