

Work in the age of artificial intelligence

Whose jobs are at risk? The answer is more complicated than you might think

Binghamton University researchers are examining AI from a variety of angles — how to improve it, the best ways to implement it and what we’re getting out of it. Image Credit: iStock.com/gremlin.

For years, workplaces have relied on a certain level of artificial intelligence to perform specific tasks, such as analyzing data, predicting patterns or automating routine processes.

The rise of generative AI, which can create new content, has accelerated both business investments and interest from society at large. Rather than just sorting preexisting information, OpenAI’s ChatGPT, Google DeepMind’s Gemini and other contenders can generate new text, images and video based on written prompts.

Many corporations, especially tech giants, see AI as a path to increased efficiency and higher profits. Critics point to concerns about fake results (known as “hallucinations”), copyright infringement (due to large-scale data scraping of text and images that are recombined into “new” results), and how workers will cope in this new environment. If AI can produce something as good as what humans do, how many of us will end up unemployed?

Binghamton University researchers are examining AI from a variety of angles — how to improve it, the best ways to implement it and what we’re getting out of it.

A new landscape for this AI ‘boom’

Carlos Gershenson-Garcia, a SUNY Empire Innovation Professor, has studied AI, artificial life and complex systems for the past two decades.

When surveying the current “AI boom,” he steps back for a moment and offers some historical perspective: “There always has been this tendency to think that breakthroughs are closer than they really are. People get disappointed and research funding stops, then it takes a decade to start up again. That creates what are called ‘AI winters.’”

He points to frustrations with machine translation and early artificial neural networks in the 1960s, and the failure of so-called “expert systems” — meant to emulate the decision-making ability of human experts — to deliver on promised advances in the 1990s.

“The big difference is that today the largest companies are IT companies, when in the ’60s and ’90s they were oil companies or banks, and then car companies. All of it was still industrial,” said Gershenson-Garcia, a faculty member in the School of Systems Science and Industrial Engineering, part of the Thomas J. Watson College of Engineering and Applied Science. “Today, all the richest companies are processing information.”

With breakthroughs in large language models such as ChatGPT, some futurists have speculated that AI can do the work of secretaries or law clerks, but Gershenson-Garcia sees that prediction as premature.

“In some cases, because this technology will simplify processes, you will be able to do the same thing with fewer people assisted by computers,” he said. “There will be very few cases where you will be able to take the humans out of the loop. There will be many more cases where you cannot get rid of any humans in the loop.”

As for generating images and doing design work, Gershenson-Garcia compares AI to the rise of photography in the mid-19th century. For centuries, painters had tried to capture a true likeness of their subjects. Once photographs could do that, artists were freed to explore more radical movements, such as Impressionism and Cubism, and photography evolved into an art form of its own.

“I don’t think it will be the end of art, but more an exploration of art in areas that technology still cannot reproduce properly,” he said. “On the other hand, there also will be new art in collaboration with computers. It will be the same in other disciplines — science, gaming, entertainment, medicine. I think it will be interesting.”

Re-evaluating the creative process

What would a good working relationship with AI look like? Christopher Swift, an assistant professor in Binghamton’s Department of Art and Design, focuses his research on human and nonhuman collaborations in the creative process. His latest project, “Speculative Anthropology of the Unknown and Maybe,” explores creating with machine learning models as a new collaborative process that decenters the graphic designer as the primary maker.

“Very often, creative people see themselves as a unique, fantastic idea machine, as opposed to being part of this incredible network of collaborators, the history of culture and the tools we use,” he said. “Very often, we’re not as central or at the top of the hierarchy as we think we are. My work encourages people to look at this wider ecology where we exist and be a little bit more humble.”

Unlike some artists and writers, Swift is not focused on copyright issues, since even humans “don’t come up with ideas from nothing.” He also disagrees that something generated by AI has less value than a piece made by human hands: “The idea that the text given to an image generator or large language model can’t produce something I would call ‘creative,’ that captures my imagination and makes me think in a different way, is a misunderstanding of the creative process and our role in it.”

Swift points out that AI and robots have been taking jobs from humans for the past few decades, mainly in the manufacturing and warehouse sectors. Only now that creative roles are threatened — such as writing, editing, photography and design — are white-collar professionals concerned about their future employment.

“Most of the critiques I’ve heard about AI and how it’s going to affect the workplace are not about AI — they’re critiques of capitalism in general,” he said. “Yes, it’s going to take away people’s jobs, and we have nothing in place to ameliorate that. It is going to devastate entire industries — that is 100% true. But it’s a mistake to say that is because of AI, as opposed to saying this is what corporations do with any new technology.”

Cutting jobs vs. optimizing the workforce

Surinder Kahai, an associate professor in the School of Management, agrees that how business leaders implement AI is at least as important as what it can do, if not more so.

Over his 33-year career at Binghamton, Kahai has focused on the intersection of leadership and technology through the lens of management information systems (MIS). During that time, workplaces have evolved from local area networks (LANs) all in one room or building to employees working remotely from all over the world using the internet and supported by powerful computing platforms, many of which rely on AI.

Kahai said managers see two choices regarding AI: Cut jobs to boost the bottom line in the short term, or optimize AI as a tool that can improve productivity and quality in the long term.

Who will be at risk? The answer is more complicated than you might think.

“Companies may believe they do not need as many higher-skilled people,” he said. “Very often, we think that AI will affect lower-skilled people, but lower-skilled people cost less. If you can make them more effective and move them up the learning curve more quickly, then why hire higher-skilled people? This way, you save money.

“The downside is that AI systems distribute the knowledge of higher-skilled people to lower-skilled people. If the work situation does not change, then the knowledge you have harvested from higher-skilled people can be used and reused for eternity. If the world and the business situation change, you still need those higher-skilled people — but maybe you need fewer of them.”

While generative AI raises some ethical concerns — especially when it lifts content from copyrighted sources or presents “hallucinations” as facts — it can also be a tool for roleplaying scenarios that help us develop as leaders.

“You can go to ChatGPT and say: ‘Pretend you are an employee who has proven to be difficult,’ then give it a scenario and ask it to engage with you,” Kahai said. “You can practice how to be a good leader in such a situation, and then you can ask it to evaluate you. It can do that quite effectively.”
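Kahai’s exercise is straightforward to reproduce in code. The sketch below uses OpenAI’s Python client to set up the roleplay he describes; the model name, prompt wording and scenario details are illustrative assumptions, not specifics from Kahai.

```python
# A minimal sketch of Kahai's roleplay exercise, assuming OpenAI's Python
# client and an illustrative model name; the prompt wording is hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system",
     "content": ("Pretend you are an employee who has proven to be difficult: "
                 "you miss deadlines and push back on feedback. Stay in character "
                 "while the user practices handling the conversation as your manager.")},
    {"role": "user",
     "content": "Thanks for meeting with me. I'd like to talk about the missed deadlines."},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)

# After a few exchanges, ask for the evaluation step Kahai mentions:
messages.append({"role": "user",
                 "content": "Step out of character and evaluate how I handled this as a leader."})
```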

When humans and robots work together

If humans and AI are going to get along well, they need a common language, or must at least share common ground about problem-solving.

Shiqi Zhang, an associate professor at Watson College’s School of Computing, studies the intersection of AI and robotics, and he especially wants to ensure that service robots work smoothly alongside humans in collaborative environments.

There’s just one problem — and it’s a big one: “Robots and humans don’t work well with each other right now,” he said. “They don’t trust each other. Humans don’t know what robots can do, and robots have no idea about the role of humans.”

Zhang and his team focus on everyday scenarios — such as homes, hospitals, airports and shopping centers — with three primary themes: robot decision-making, human–robot interaction and robot task-motion planning. Zhang uses language and graphics to show how the AI makes decisions and why humans should trust those decisions.

“AI’s robot system is not transparent,” he said. “When the robot is trying to do something, humans have no idea how it makes the decision. Sometimes humans are too optimistic about robots, and sometimes it’s the other way round — so one way or the other, it’s not a good ecosystem for a human–robot team.”

One question for software and hardware designers improving AI–human collaborations is how much information needs to be shared back and forth to optimize productivity. There should be enough so that humans can make informed decisions, but not so much that they are overwhelmed with unnecessary information.

Zhang is experimenting with augmented reality (AR), which overlays computer-generated information on a user’s view of the real world. Unlike virtual reality (VR), which is entirely computer-generated, AR lets someone on a factory floor stacked with boxes and crates pull out a tablet or put on a pair of AR-enhanced glasses to see where the robots are, so that accidents can be avoided.

“Because these robots are closely working with people, safety becomes a huge issue,” Zhang said. “How do we make sure the robot is close enough to provide services but keeping its distance to follow social norms and be safe? There is no standard way to enable this kind of communication. Humans talk to each other in natural language, and we use gestures and nonverbal cues, but how do we get robots to understand?”
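The distance trade-off Zhang describes can be made concrete with a toy policy. This is a minimal sketch, not his lab’s system, and every threshold in it is an assumed value chosen purely for illustration.

```python
# A toy proximity policy for a service robot, illustrating the band between
# "close enough to serve" and "far enough to be safe and socially comfortable."
# All thresholds are illustrative assumptions, not values from Zhang's research.
import math

EMERGENCY_STOP_M = 0.3  # assumed hard safety cutoff
SOCIAL_MIN_M = 0.8      # assumed lower edge of comfortable personal space
SERVICE_MAX_M = 1.5     # assumed farthest distance at which service is useful

def next_action(robot_xy, human_xy):
    """Choose a motion command from the robot's distance to the nearest human."""
    d = math.dist(robot_xy, human_xy)
    if d < EMERGENCY_STOP_M:
        return "stop"         # safety overrides everything else
    if d < SOCIAL_MIN_M:
        return "back_away"    # respect personal space
    if d > SERVICE_MAX_M:
        return "approach"     # too far away to provide the service
    return "hold_position"    # inside the acceptable band

print(next_action((0.0, 0.0), (1.0, 0.0)))  # -> hold_position
```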

When it comes to AI, specific is best

If your workplace falls under the science or research realm, or if you do anything that involves combing through large amounts of data, AI can be a valuable tool for sorting everything at lightning speed. That is, if the algorithm is designed correctly.

Alexey Kolmogorov, a professor of physics, has been developing the Module for Ab Initio Structure Evolution (MAISE) simulation package for 15 years. At the intersection of physics, materials science and computer science, MAISE uses an evolutionary algorithm for finding stable crystal structures and a neural network module for modeling interatomic interactions.

Kolmogorov recalls that using AI for materials research hit a wall in the early 2000s because it proved difficult to translate information about atomic structure into something that the learning machine would understand. Later in the decade, the materials modeling community figured out how to parse structural information and feed it to neural networks. Those AIs, inspired by the human brain, offered the flexibility to construct general interaction models automatically with little human input.
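To make those two ingredients concrete (an evolutionary search over candidate structures and a learned model that scores their energies), here is a deliberately toy sketch. It is not MAISE: the “structure” is just a vector of numbers, and the scoring function stands in for a trained neural-network potential.

```python
# A toy evolutionary structure search with a surrogate energy model.
# Not MAISE: the "structure" is a plain vector and surrogate_energy() is a
# stand-in for a trained neural-network interatomic potential.
import random

def surrogate_energy(structure):
    """Stand-in for a neural-network potential: lower means more stable."""
    return sum((x - 0.5) ** 2 for x in structure)

def mutate(structure, scale=0.1):
    """Perturb a parent structure to produce a child candidate."""
    return [x + random.uniform(-scale, scale) for x in structure]

def evolve(pop_size=20, dims=6, generations=50):
    population = [[random.random() for _ in range(dims)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=surrogate_energy)       # rank by predicted energy
        survivors = population[: pop_size // 2]     # keep the fittest half
        children = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + children           # form the next generation
    return min(population, key=surrogate_energy)

best = evolve()
print(f"best predicted energy: {surrogate_energy(best):.4f}")
```

In practice, the handful of lowest-energy candidates from a search like this would then be re-checked with first-principles methods, which is the verification step Kolmogorov describes below.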

“Whenever we come up with a machine-learning prediction, we still check it,” he said. “Once you narrow down the pool of possible candidates, now it becomes feasible to test it with the best possible available methods. In my group, we published papers that I believe to be the first examples where neural network potentials were used to predict compounds that are truly stable.”

Exploring the chemical world guided by machine-learning models has the potential to change the way we discover new materials.

“Neural networks developed with MAISE accelerated the traditional structure search process a hundredfold,” Kolmogorov said. “This enabled us to screen over 3 million compounds in a year and identify dozens of previously overlooked materials.”

While he is enthusiastic about the possibilities for accelerated exploration using AI designed for specific purposes like chemistry, accounting or healthcare, Kolmogorov remains doubtful of general AI models.

“It is incredible to see how far machine learning has advanced since I first used it in my Ph.D. research over 25 years ago,” he said, “but major breakthroughs are needed to make artificial general intelligence a reality.”

AI merely adding ‘more noise and detail’

Stephanie Tulk Jesso, an assistant professor at Watson College’s SSIE School, shares those doubts. She researches human–AI interaction and more general ideas of human-centered design — in short, asking people what they want from a product, rather than just forcing them to use something unsuitable for the task.

“I’ve never seen any successful approaches to incorporating AI to make any work better for anyone ever,” she said. “Granted, I haven’t seen everything under the sun — but in my own experience, AI just means having to dig through more noise and detail. It’s not adding anything of real value.”

Tulk Jesso believes there are many problems with greater reliance on AI in the workplace. One is that many tech experts are overselling it — AI should be a tool, not a replacement for human employees. Another is that AI is often designed without an understanding of the job it is meant to support, making work harder for employees rather than easier.

Lawsuits about copyrighted materials “scraped” and repurposed from the internet remain unresolved, and environmentalists have climate concerns about how much energy generative AI requires to run. Among the ethical concerns are “digital sweatshops” in developing countries where workers train AI models while enduring harsh conditions and low pay.

Tulk Jesso also sees AI as too unreliable for important tasks. Earlier this year, for instance, Google’s AI suggested adding glue to pizza to help the cheese stick better, as well as eating a small rock daily as part of a healthy diet.

Fundamentally, she said, we just don’t know enough about AI and how it works: “Steel is a design material. We test steel in a laboratory. We know the tensile strength and all kinds of details about that material. AI should be the same thing, but if we’re putting it into something based on a lot of assumptions, we’re not setting ourselves up for great success.”

Despite AI’s limitations, corporations worried about keeping pace with competitors — and, of course, making a profit — are ramping up AI integration, regardless of whether it’s shown to have any great benefit. Because technology moves faster than legislation, it’s also unclear how AI should be regulated.

“There needs to be some kind of enforcer. I don’t know if that’s coming from lawmakers right now, and I don’t know if it ever can be codified into laws,” Tulk Jesso said. “We may need to rely on social laws — the way that we say, ‘No, you’re not putting that into my workspace, and if you do that, I’m going to quit, or I’m going to unionize and I’m going to fight this. I need to have some way to control my own environment.’”