Will AI take over the world and all our jobs?

MANY of us have used AI tools like ChatGPT, DALL-E and Midjourney and have been blown away by what they can do. Interacting with ChatGPT feels like talking to a real person who can answer almost any question we can think of, explain complex topics in simple words, summarise articles and even write poetry, Shakespeare-like prose, essays and homework assignments. DALL-E and Midjourney can produce spectacular images from text prompts, making artists out of people with no artistic skills. News articles have excitedly announced that AI tools have passed law, maths and medical exams. Abstracts written by ChatGPT for medical research journals have fooled scientists into believing humans wrote them. The experience of using these tools, together with the relentless media hype around AI, has created a general perception that we are in the midst of a technological revolution like no other, one which will transform our lives.

The general belief is that we already have, or are on the cusp of creating, machines with human-like intelligence that will be able to accomplish most tasks humans can do, making many of us redundant and taking away a whole range of employment opportunities. Countless articles have appeared announcing the impending loss of millions of factory jobs to automation, along with the jobs of a whole range of professionals, from journalists, writers and content creators to lawyers, teachers, software programmers and doctors.

As spectacular as these tools are, are they really “intelligent”? Does ChatGPT actually understand the questions we put to it, and does it understand the subject matter when it responds? Do these tools think like artists when they produce amazing images or poetry? Many people, including AI experts, have claimed that these tools show “sparks of artificial general intelligence”, or human-like intelligence, the holy grail of AI research. Fantastic claims are made about reaching the point of “AI singularity”, when machines equal and then surpass human intelligence. However, we should not be fooled by such claims. These AI tools possess nothing close to human intelligence, nor is the current direction of AI research likely to produce such intelligent systems anytime in the foreseeable future. When they generate text, images or even code, they have no understanding or mastery of the subject matter. This category of tools, called generative AI, works by ingesting huge amounts of data, pretty much the entire body of text and images publicly available on the internet. The tools then learn statistical patterns within the data, i.e., which word or phrase is likely to come after a given word or part of a sentence. The text or images they produce are a remix of the data fed into them, assembled using these learned statistical patterns. They are effectively rehashing text and images they have already seen, fooling us into believing that they understand or have mastery over a subject.
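To make the idea of “learned statistical patterns” concrete, here is a minimal sketch in Python of a toy word-level model. This is not how ChatGPT actually works internally, which relies on vastly larger neural networks, but the underlying principle is the same: count which word tends to follow which, then generate text by sampling from those counts. The corpus here is invented for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for "pretty much all the text on the internet".
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Learn the statistical pattern: how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """Generate text by repeatedly sampling a statistically likely next word."""
    word, output = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        # Pick the next word in proportion to how often it followed this one.
        word = random.choices(list(options), weights=list(options.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

Everything this toy model “writes” is a recombination of sequences it has already seen. Scale the corpus up to the public internet and the model up to billions of parameters, and the output becomes fluent enough to be mistaken for understanding.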

The above is not to say that these tools are not useful or impressive. They can be of immense value in assisting our work. They are just not “intelligent”, and relying on them entirely can lead to incorrect and at times disastrous results. As an example, the autocomplete feature on our mobile phones is quite useful while typing, but relying completely on its suggestions without proofreading would lead to typing errors and messages the user never intended. We use the feature as an assist, without relying on it completely or thinking of it as truly intelligent. Most generative AI tools should be thought of and used along the same lines. ChatGPT is useful for summarising or rephrasing articles and passages, and even for simplifying text written in an obscure and complex manner, but it does not give authoritative answers on any topic, nor can it properly replace humans in writing news articles or scholarly papers. Similarly, code generators such as Copilot are very useful tools, but no serious programmer would take the generated code as is without carefully reviewing it. The problem is that, given the hype and the cost-cutting culture of today’s business world, these tools will be, and in many cases already are being, used to replace humans entirely or with little supervision, leading to poor-quality and at times outright incorrect output. At the same time, we need to guard against generative AI being used to produce things like deepfake videos, which can be used to spread misinformation and manipulate people.
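To illustrate why generated code always needs review, here is a hypothetical example, not taken from any real assistant, of the kind of plausible-looking Python function such a tool might produce. It works on typical inputs but crashes on an edge case that a careful reviewer would catch:

```python
# Plausible-looking "generated" code: an average-rating helper.
def average_rating(ratings):
    return sum(ratings) / len(ratings)  # crashes on an empty list (ZeroDivisionError)

# What a reviewing programmer would turn it into:
def average_rating_reviewed(ratings):
    if not ratings:                     # guard against empty input
        return 0.0                      # or raise a domain-appropriate error
    return sum(ratings) / len(ratings)

print(average_rating_reviewed([4, 5, 3]))  # 4.0
print(average_rating_reviewed([]))         # 0.0 instead of a crash
```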

There are also ethical concerns around AI-generated art, literature and movies. While the AI produces seemingly novel pieces of art, it in effect learns the artistic style of existing images and artwork and can then reproduce that style without copying the content, thereby sidestepping charges of plagiarism. In reality, such art amounts to high-tech plagiarism: the models are incapable of creativity and simply generate content based on learned statistical patterns.

The kind of AI programs we have discussed so far fall into the category of generative AI. There are other kinds of AI that are used not to generate content but to make decisions or predictions. These programs look at past data and develop statistical correlations in order to predict future outcomes or to identify and classify things. They have already been deployed in many businesses, government institutions and services the world over, and there are already numerous examples of such AI decision systems denying people legitimate insurance claims, medical and hospitalisation benefits and state welfare benefits. AI systems in the U.S. have been implicated in sentencing minorities to longer prison terms. There have even been reports of minority parents losing parental rights on the basis of spurious statistical correlations, which often boil down to their not having enough money to properly feed and take care of their children. These systems are almost always problematic and must be opposed. The science behind them is dubious: the data used to train them is often not scientifically controlled, and the statistical correlations they develop to make their decisions are almost never publicly disclosed, making them obscure and arbitrary. They are often deployed by businesses and governments to cut costs by denying legitimate benefits to customers and citizens.
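A minimal sketch, with entirely invented data and feature names, shows how easily such a decision system can encode a spurious correlation. If past approvals happened to correlate with an applicant’s postcode, a standard classifier will learn that correlation and reproduce it, regardless of the merits of a claim:

```python
# Hypothetical illustration: a "claims" classifier trained on biased history.
from sklearn.linear_model import LogisticRegression

# Features per claim: [claim_amount (normalised), postcode_group].
# The postcode says nothing about a claim's merit, but past decisions
# happened to correlate with it.
X = [[0.2, 0], [0.4, 0], [0.3, 0], [0.5, 0],   # postcode group 0: approved
     [0.2, 1], [0.4, 1], [0.3, 1], [0.5, 1]]   # postcode group 1: denied
y = [1, 1, 1, 1, 0, 0, 0, 0]                   # 1 = approved, 0 = denied

model = LogisticRegression().fit(X, y)

# Two claims identical in every respect except postcode group:
print(model.predict([[0.3, 0], [0.3, 1]]))     # likely [1 0]: opposite outcomes
```

Two otherwise identical claims receive opposite outcomes purely because of the postcode feature, and since neither the training data nor the learned correlations are disclosed, the people affected have no way of knowing why.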

AI algorithms are used by social media companies to determine which content to display in our feeds and also to perform content moderation, i.e., to decide which posts or images violate policies and must be taken down. These algorithms do not work very well at actually detecting inappropriate content or hate speech. Instead, they target words or topics that are politically inconvenient for the authorities, resulting in outright censorship in the name of content moderation: posts are taken down for being critical of the ruling party and its leaders, or of Western-backed attacks on peoples and countries, such as the Israeli genocide in Gaza. We know very well the havoc social media algorithms have caused through the propagation of fake news and the creation of hate-filled filter bubbles.

What about AI robots replacing industrial workers? Doing physical things in the real world requires dealing with complexity, non-uniformity, disorder and unanticipated situations. It also requires practice in actually doing those things, not just reading about them. It is for this reason that progress in automating factory labour has been exceedingly slow. Robots can handle only fixed, repetitive tasks involving identical rigid objects, such as on certain automobile assembly lines. Even after years of hype about driverless cars, and huge amounts of funding for their research, fully automated driving still does not appear feasible in the near future. It is perhaps ironic that instead of blue-collar jobs being replaced, as was widely expected, certain white-collar jobs, such as transcribing speech, translating between languages and staffing call centres, are the most likely to be taken over by AI.

All this is not to say that all AI is bad or useless. Many AI programs are certainly useful and can do wonders if deployed properly and used as assistance tools, with their limitations kept in mind. However, the self-serving hype generated around AI by the industry itself and by the media is way out of proportion. AI will certainly not solve all our problems, nor will it take away all our jobs, and science-fiction scenarios of a super-intelligent AI going rogue and destroying or enslaving humankind are certainly not happening anytime soon. What is far more likely is that AI programs and services will be deployed without adequate testing and supervision, will fail to perform as advertised, and will cause harm and hardship by denying people access to services they are entitled to, such as welfare benefits, hospital services and insurance claims. We need to guard against the deployment of such obscure systems and push our governments to regulate them.


