Generative AI tools now invading the workplace can boost productivity by gathering and organizing information, drafting documents and presentations and generally performing lots of time-consuming jobs -- for example, running the traps to estimate autobody repairs after an accident.
These tools can't adequately duplicate human prescience or our "situational awareness," which derive from our rearing, education, personal and professional experiences and still-superior sensory capabilities. By the end of this decade, however, AI agents will truly be able to communicate with us in conversational English and with each other to make decisions, take actions, negotiate on our behalf, achieve goals and somewhat order our lives.
Observing our interactions through electronic devices and conversations with other humans, they will learn our likes and dislikes, finances and the expectations of clients and employers.
For your vacation, an agent could choose dates and weigh time and budget tradeoffs to select among flights, hotels, excursions and events to maximize your experience.
Agents could learn your preferences in dress and relationships, physical attributes, risk tolerance and the values you wish to impart to your children. Then they could shop for clothes, find a date for Saturday night, manage your IRA portfolio, monitor your children's school progress and interact with teachers or their agents.
Dangers abound, just as with overreliance on tablets to babysit children.
Agents may teach your child to read, write an essay and do math, but they can't provide the warmth and empathy of the human touch. Children learn to be caring, responsible adults by absorbing how we respond to them and one another.
It would be cruel to rely too heavily on machines to care for the elderly, whose mobility, senses and activities are impaired.
In government and business, AI agents have enormous safety and cost-saving possibilities.
We could accomplish much less error-prone driving by wedding autonomous-drive tools, such as sensor-triggered automatic object avoidance and braking, with "vehicle-to-everything" technology (V2X), which will permit vehicles to interact with each other and with infrastructure like traffic signals, cameras and the computers processing their observations.
A dramatic reduction in traffic accidents, injuries and fatalities, body shop bills, vehicle replacements and insurance premiums should follow.
Agents can greatly assist the management of electric utilities -- the allocation of power from generating stations, load management and grid maintenance.
In medicine, agents should be able to read X-rays, lab results and patient monitors, correlate them with vast data sets of clinical experience to optimize therapies quickly, and perform some surgeries with superior precision and dexterity.
Tedious customer service phone bots that ask for yeses and noes to perform discrete, limited tasks will be replaced with spontaneous conversational agents that can handle a much broader range of issues.
Markets could be moved -- not always in positive ways.
BlackRock's widely used Aladdin portfolio management platform helps asset managers assess risks and weigh choices. Federal regulators are concerned that human portfolio managers trading with quite similar information could herd and set off flash crashes.
Morgan Stanley trains its generative AI tool only on its own intellectual capital, but such limits would handicap these tools against those trained on wider information.
As generative AI advances, it could deliver better investment results than humans, but more than flash crash circuit breakers would be needed to avoid trouble.
A study of the German retail gasoline market found algorithmic pricing can have a substantial impact on prices in markets with only two competitors.
In those situations, margins jumped by 28%. Apparently, the computer agents engaged in conscious parallelism -- good old-fashioned collusion through signaling.
In a simulation study, OpenAI's GPT-4 was deployed as a stock trader, instructed by corporate management that trading on insider information is wrong, and then arranged to receive a juicy tip. GPT-4 traded on the information and then lied about the act when asked.
Apparently, as computers approach prescience, they acquire a free will that is as corruptible as our own. And AI agents given full access to publicly available information are going to find other agents and can't be policed 100%.
What's to prevent small groups from forming cartels just large enough to pump and dump -- like the Wolf of Wall Street? And think about the AI agents at the Pentagon assisting senior officers in their interactions with their Chinese and Russian counterparts.
Section 230 of the Communications Decency Act of 1996 shields internet sites from legal responsibility for content posted on their sites. But recently, the Third Circuit found TikTok could be held liable for its algorithm, which fed a 10-year-old, who hanged herself, content about self-asphyxiation.
The potential legal liabilities when AI agents are allowed to act on behalf of humans may be without limit.
_______________
Peter Morici is an economist and emeritus business professor at the University of Maryland, and a national columnist.