Humanizing Artificial Intelligence

Making AI More Effective and Reducing Risks by Replicating Our Brains

Artificial intelligence is a powerful tool that can enhance our lives in many ways, but it has significant limitations and poses serious risks.

These limitations include bias and a lack of ethics. The risks range from social manipulation and economic instability to, in the worst case, autonomous weapons run amok.

Carleton University cognitive science researcher Mary Kelly, who runs the Adaptive Neuromorphics, Intelligence, Memory and Unified Systems (ANIMUS) Lab, is addressing these types of challenges on several different fronts.

One of the reasons the architects of AI systems struggle with these issues, she says, is an inability to accurately replicate what happens inside the human brain.

Carleton University cognitive science researcher Mary Kelly (Photo by Terence Ho)

Artificial neural networks are a type of AI technology — layers of interconnected nodes that resemble our brains. Think of these neural networks as the brains that control AI robots.

You can train a robot to perform a surgery, for example, and once it has mastered this skill, teach it how to fill out pharmaceutical prescriptions. But if you then ask it to operate on a patient, it will have forgotten how to complete this task.

This problem is the result of a phenomenon known as “catastrophic interference,” the tendency of an artificial neural network to lose previously learned information when it picks up something new. This restricts a neural network’s ability to develop iteratively over time. People learn and grow by navigating the day-to-day world, but a neural network that learns something today forgets everything it learned yesterday.
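
For readers who want to see the phenomenon concretely, here is a minimal sketch in Python (not the lab’s code): a toy classifier stands in for a neural network, learns one task, is retrained on a second with the same weights, and loses the first. The tasks and training loop are invented for illustration.

```python
# Illustrative sketch only (not the ANIMUS Lab's code): a toy classifier
# stands in for a neural network. It learns task A, is then retrained on
# task B with the same weights, and its performance on task A collapses:
# catastrophic interference in miniature.
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, feature):
    """Toy task: the label depends on only one input feature."""
    X = rng.normal(size=(n, 2))
    y = (X[:, feature] > 0).astype(float)
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, X, y, lr=0.5, epochs=200):
    """Plain gradient descent on the logistic loss."""
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return ((sigmoid(X @ w) > 0.5) == y).mean()

X_a, y_a = make_task(1000, feature=0)   # task A: label comes from feature 0
X_b, y_b = make_task(1000, feature=1)   # task B: label comes from feature 1

w = np.zeros(2)
w = train(w, X_a, y_a)
print("task A accuracy after learning A:", accuracy(w, X_a, y_a))   # ~1.0

w = train(w, X_b, y_b)                  # same weights, retrained on task B
print("task A accuracy after learning B:", accuracy(w, X_a, y_a))   # drops toward chance
print("task B accuracy after learning B:", accuracy(w, X_b, y_b))   # ~1.0
```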

“There’s no single silver bullet,” says Kelly, whose background includes machine learning, cognitive psychology and computational linguistics.

“But basically, we’re trying to help develop neural networks that can do multiple things at the same time.”

Solving ‘Catastrophic Interference’

Artificial neural networks experience catastrophic interference because they are essentially a single entity that gets retrained again and again. The neurons that have been taught how to perform a surgery, Kelly explains, are re-allocated to filling prescriptions.

One solution that she and students Eilene Tomkins-Flanagan and Maria Vorobeva are exploring is “functional specificity,” a neuroscience concept that holds that different parts of the brain specialize in different functions.
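
A rough way to picture that idea in code, continuing the toy example above, is to give each task its own dedicated set of weights, so learning a second task cannot overwrite the first. In real brains, and in the models the lab builds, the specialization is learned rather than assigned by hand, so treat this only as an illustration.

```python
# Illustrative sketch only, reusing the toy setup above: each task gets its
# own dedicated "module" of weights, so learning task B never touches the
# weights that encode task A. Real functional specificity emerges through
# learning rather than being assigned by hand.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_task(n, feature):
    X = rng.normal(size=(n, 2))
    return X, (X[:, feature] > 0).astype(float)

class ModularClassifier:
    """One weight module per task; only the active module is read or updated."""

    def __init__(self, n_features, n_tasks):
        self.W = np.zeros((n_tasks, n_features))

    def train(self, task, X, y, lr=0.5, epochs=200):
        w = self.W[task]                      # a view into this task's module
        for _ in range(epochs):
            p = sigmoid(X @ w)
            w -= lr * X.T @ (p - y) / len(y)  # updates only this module

    def accuracy(self, task, X, y):
        return ((sigmoid(X @ self.W[task]) > 0.5) == y).mean()

X_a, y_a = make_task(1000, feature=0)
X_b, y_b = make_task(1000, feature=1)

model = ModularClassifier(n_features=2, n_tasks=2)
model.train(0, X_a, y_a)
model.train(1, X_b, y_b)
print("task A accuracy:", model.accuracy(0, X_a, y_a))   # preserved, ~1.0
print("task B accuracy:", model.accuracy(1, X_b, y_b))   # ~1.0
```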

Another is an attempt to replicate “holographic memory systems,” an approach rooted in the idea that neurons throughout the human brain fire in concert to encode our data-rich memories and thoughts.
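
One well-known formalization of this idea is the holographic reduced representation, in which concepts are high-dimensional vectors that are “bound” together with circular convolution and superimposed into a single memory trace. The sketch below illustrates that general mechanism; it is not the ANIMUS Lab’s specific model, and the concepts and dimensions are invented for the example.

```python
# Minimal sketch of holographic reduced representations (HRRs): distributed
# vectors are bound with circular convolution, superimposed into one trace,
# and queried by binding with an approximate inverse. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
D = 2048  # dimensionality of the distributed representation

def vec():
    """Random vector standing in for a concept's distributed code."""
    return rng.normal(0, 1 / np.sqrt(D), D)

def bind(a, b):
    """Circular convolution, computed with FFTs."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def inverse(a):
    """Approximate inverse for unbinding (reverse all but the first element)."""
    return np.concatenate(([a[0]], a[1:][::-1]))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Concepts, each encoded as a high-dimensional vector
role_agent, role_action = vec(), vec()
doctor, surgery, pharmacist, prescription = vec(), vec(), vec(), vec()

# One memory trace holds two role-filler pairs at once (superposition)
memory = bind(role_agent, doctor) + bind(role_action, surgery)

# Query: "what was the action?" -> unbind and compare to known concepts
retrieved = bind(memory, inverse(role_action))
for name, v in [("doctor", doctor), ("surgery", surgery),
                ("pharmacist", pharmacist), ("prescription", prescription)]:
    print(name, round(cosine(retrieved, v), 2))  # 'surgery' scores highest
```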

Photo by ipopba / iStockPhoto

If these approaches are used to inform the design of artificial neural networks, the AI systems they power could become better equipped to handle the wide variety of tasks we throw at them in the years and decades ahead.

Kelly and her students write code for artificial neural networks, run simulations to test AI systems and compare the results to experiments that other researchers have conducted using human subjects.

Ultimately, their goal is to help build more human-like and more effective artificial neural networks, which, in turn, will deepen our understanding of how the brain works.

Reducing AI Risks

The ANIMUS Lab team is also exploring some of the risks around AI, including an issue rooted in the “cobra effect” problem.

Venomous cobras were a concern in colonial India, so the British government put a bounty on the snakes. Instead of shrinking the cobra population, the reward prompted citizens to start farming cobras and killing them for the payoff.

The AI corollary, Kelly explains, is a robot paramedic that’s rewarded when it brings injured people to a hospital. What’s to stop it from hurting people so it has more patients?

Photo by ekkasit919 / iStockPhoto

That’s one of the questions being investigated by undergraduate research assistant Taran Allan-McKay, whose work probes the moral decision-making of AI agents and how attention to ethical and safety considerations can improve the choices these systems make.

PhD student Spencer Eckler, meanwhile, is looking into “causal reasoning,” which is defined as “the use of logic and facts to determine cause and effect relationships.” This kind of reasoning could deter a robot paramedic from intentionally causing car accidents, because the system would understand the harm that would result.
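
As a purely hypothetical illustration of how that could work, the toy model below lets an agent simulate the consequences of “cause an accident” before acting: its reward would go up, but so would the harm it causes, so the action is rejected. The variables and numbers are invented; they are not drawn from Eckler’s research.

```python
# Hypothetical toy sketch: a tiny causal model of the world lets the agent
# check what an action would cause before taking it. Invented for illustration.

def simulate(cause_accident: bool) -> dict:
    """Tiny structural causal model: each variable is a function of its causes."""
    injuries = 5 + (3 if cause_accident else 0)   # accidents cause injuries
    patients_delivered = injuries                  # injured people become patients
    harm_done = injuries                           # injuries are harm
    reward = patients_delivered                    # naive reward: deliveries
    return {"harm": harm_done, "reward": reward}

def choose(action_causes_accident: bool) -> str:
    baseline = simulate(cause_accident=False)
    outcome = simulate(cause_accident=action_causes_accident)
    # Causal check: does this action cause extra harm relative to not acting?
    if outcome["harm"] > baseline["harm"]:
        return "rejected: action causes harm, even though reward would rise"
    return "allowed"

print(choose(action_causes_accident=True))   # rejected
print(choose(action_causes_accident=False))  # allowed
```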

According to Kelly, two of AI’s biggest dangers are the threat that disinformation poses to democracy and “corporate greed.”

“The prospect of a robot with a gun,” she adds, “really scares me.”

One way to mitigate these risks is a human-in-the-loop approach, in which people vet AI decisions to ensure they’re appropriate. Kelly is working on a National Research Council-supported project to explore how AI systems can make better decisions and also explain these decisions.
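
A bare-bones sketch of the human-in-the-loop pattern might look like the following: the system acts on its own only when it is confident, and every decision comes with an explanation a person can vet. The threshold, rules and function names are illustrative assumptions, not details of the NRC-supported project.

```python
# Hypothetical sketch of a human-in-the-loop pipeline: low-confidence
# decisions are escalated to a person, and every decision carries an
# explanation. All names and numbers here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float
    explanation: str

def ai_decide(case: dict) -> Decision:
    # Stand-in for a real model: a trivial rule plus a made-up confidence score.
    if case["severity"] > 7:
        return Decision("dispatch ambulance", 0.95, "severity above threshold")
    return Decision("schedule clinic visit", 0.55, "severity looks moderate")

def human_in_the_loop(case: dict, threshold: float = 0.8) -> str:
    decision = ai_decide(case)
    if decision.confidence < threshold:
        # Low confidence: a person reviews the decision and its explanation.
        return f"escalated to human reviewer: {decision.action} (because {decision.explanation})"
    return f"auto-approved: {decision.action} (because {decision.explanation})"

print(human_in_the_loop({"severity": 9}))   # confident, so it acts, but explains itself
print(human_in_the_loop({"severity": 5}))   # uncertain, so a person vets it
```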

“There are a lot of fears around the future of AI and I share many of those fears,” she says, “but there are also a lot of potential benefits to society, especially if we can develop better systems.”

Photo by ipopba / iStockPhoto
