Curtin University of Technology
Curtin Insight

A.I.: No longer alone

by Terence Tan

You live alone on an island. Nothing happens on the island without you knowing. Every creature, plant, rock and river is known to you. It is your island. It is your world.

Traditionally, Artificial Intelligence (A.I.) researchers have worked towards creating machines that perceive the world, understand it and act on it. You can visit an automotive factory and be utterly amazed at the speed and precision of a robot assembling a car. However, if you were to take that robot, put it in a playground and ask it to play with blocks with children, it could not. The machine can only function in an environment it can sense and control.

One day, you walk towards the river. You look forward to pulling up your fish traps and enjoying some fresh fish for dinner. As you gaze up at the tree canopies, you notice the birds are not flying in their normal patterns. A deer suddenly crosses your path. With a smile, you motion for it to come and enjoy the fruit in your palm. The deer prances off hurriedly.

Feeling uneasy, you reach the riverbank. The traps are on the riverbank, and all are empty. Someone has pulled them up, opened them, and taken out your catch. There is someone else on the island. Although the riverbank is dry, you can still make out the tracks of the thief. You decide to follow the tracks and confront him.

When there is only one intelligent machine or agent in the environment, it is easier for an agent to learn. The agent performs an action (pushes a rock) and something happens (the rock moves). The agent then concludes its action causes a change in the environment (“If I push a rock this hard, it will move this far”).
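To make this concrete, here is a small illustrative sketch (the rock, the forces and the numbers are my own inventions, not part of any real system): a lone agent pushes with different forces, records how far the rock moves, and fits a simple "distance = k × force" rule from its own experience.

```python
# Illustrative sketch: a lone agent learns a cause-effect rule by acting
# on its environment and observing what happens.

def world_response(force):
    """The environment: the rock moves in proportion to the push."""
    return 0.5 * force  # hidden constant the agent must discover

# The agent experiments with a few pushes...
observations = [(f, world_response(f)) for f in [1.0, 2.0, 4.0]]

# ...and infers the proportionality constant from its own actions alone.
k = sum(dist / force for force, dist in observations) / len(observations)

predicted = k * 3.0
print(f"Learned rule: distance = {k:.2f} * force")
print(f"If I push with force 3.0, the rock should move {predicted:.2f}")
```

Because the agent is alone, every change it observes really was caused by its own action, so the inferred rule is reliable.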

But when there are two agents, the conclusion may be flawed because the second agent could be pulling on the rock, causing it to move further, or pushing against the rock, causing it to move less. In what other ways can the new agent effect a change? Is the new agent deliberately impeding the first agent? Knowing the answers will help the first agent plan its next move.
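A small illustrative sketch of this pitfall (again with invented names and numbers): an agent that learns from a push while a hidden second agent is also pushing ends up with a rule that fails once it is alone.

```python
# Illustrative sketch: a hidden second agent confounds cause-effect learning.

def world_response_alone(force):
    """The rock's true response to a single agent's push."""
    return 0.5 * force

def world_response_with_stranger(force, stranger_force):
    """The observed movement mixes both agents' pushes."""
    return 0.5 * (force + stranger_force)

# The first agent pushes with force 2.0; unseen, a stranger adds 2.0 more.
observed = world_response_with_stranger(2.0, 2.0)   # rock moves 2.0
naive_k = observed / 2.0                            # agent infers k = 1.0

# The naive rule now over-predicts once the stranger stops helping.
prediction = naive_k * 2.0              # the agent expects 2.0
reality = world_response_alone(2.0)     # the rock actually moves 1.0
print(f"predicted {prediction}, got {reality}")
```

The agent attributed the whole effect to its own action; disentangling its contribution from the stranger's is exactly the problem a multi-agent learner faces.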

The tracks lead to a cave. As you head towards the entrance, a fist-sized rock strikes the ground between your feet. A boy walks out from the shadows, his right hand gripping another rock. He shouts at you but you do not understand him. You lift up your open palms to show that you are unarmed. He gestures for you to leave.

Slowly, you reach into your bag and offer the contents. The boy looks longingly at the offer in your hand. Smiling, you motion for the boy to take it. The boy warily walks over and snatches the fruit from your hand. He gobbles it hungrily. He ignores you as you enter the cave. On the floor lie the stolen fish and a failed attempt to build a fire.

When two strangers meet, a smile or a frown easily communicates intent without speaking. However, a smile is not universally understood (try smiling at a rampaging bull). Therefore, any form of communication requires a common understanding, also known as a communication protocol.

For instance, strict rules dictate how one should speak to the Queen of England. When she says this, you say that. If she does this, you do that. You address her as "Your Majesty", not Liz. Similarly, when a computer connects to the Internet, it follows a strict communication protocol, known as the Transmission Control Protocol/Internet Protocol (TCP/IP).
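The idea can be sketched as a toy protocol (my own example, far simpler than TCP/IP): both parties share a rulebook that fixes what each message means and what reply it expects, and a message outside the rulebook is meaningless.

```python
# Illustrative toy protocol: communication works only when both parties
# share the same rulebook mapping each message to its expected reply.

PROTOCOL = {
    "HELLO": "HELLO-ACK",
    "REQUEST-FISH": "FISH",
    "BYE": "BYE-ACK",
}

def respond(message):
    """A party that follows the protocol replies predictably."""
    return PROTOCOL.get(message, "ERROR: message not in protocol")

# Both sides share the rulebook, so this exchange succeeds...
print(respond("HELLO"))   # HELLO-ACK
# ...but a gesture outside the agreed protocol gets nowhere,
# much like smiling at a rampaging bull.
print(respond("smile"))   # ERROR: message not in protocol
```

Real protocols such as TCP/IP work on the same principle, only with far more elaborate rulebooks covering sequencing, acknowledgement and error recovery.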

In financial markets, humans are increasingly using software agents to execute buy or sell orders. The agents communicate via protocols that are designed to be fast, clear and reliable. In multi-agent cases, as in all other cases, business starts with good communication.

You walk home with a grin. The boy agreed to set and pull the traps in exchange for fruits and a fire. The fresh fish for dinner tasted better with another human being's company, even though you couldn't understand a word of his ceaseless chatter. As you walk by the shore under the moonlight, you look out to sea. A large boat is coming to shore. Many men and women, young and old, are on board, holding torches and pulling oars.

Quickly, you run back to the safety of the jungle. Behind you, men in armour clamber down the boat, barking orders to the passengers. Men begin to unload wooden planks while women carry children and little babies off the boat. Suddenly, life on the island has changed.

Two agents are manageable; many agents are hard. Further complicating the problem are agents with different abilities, knowledge and objectives. Therefore, agents have to communicate with each other and figure out between themselves how best to use their abilities to achieve their objectives.

In the RoboCup Rescue Project, a disaster simulation platform, the agents’ objective is to rescue civilians effectively and efficiently after a disaster has struck a city. The agents take the roles of police, fire brigade and ambulance teams to deal with blocked roads, spreading fires and injured civilians.
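A drastically simplified sketch of the coordination problem (all names here are invented and bear no relation to the real RoboCup Rescue simulator): agents with different abilities announce what they can do, and each task is matched to a capable, unoccupied agent.

```python
# Illustrative sketch: greedy capability-based task allocation among
# agents with different abilities (a toy stand-in for disaster response).

agents = {
    "police_1":    {"clear_road"},
    "fire_1":      {"extinguish"},
    "ambulance_1": {"rescue"},
    "ambulance_2": {"rescue"},
}

tasks = ["clear_road", "rescue", "extinguish", "rescue"]

def allocate(agents, tasks):
    """Assign each task to the first free agent able to perform it."""
    free = dict(agents)
    plan = []
    for task in tasks:
        for name, skills in list(free.items()):
            if task in skills:
                plan.append((task, name))
                del free[name]  # each agent takes at most one task
                break
    return plan

plan = allocate(agents, tasks)
print(plan)
```

A greedy matcher like this ignores travel time, task urgency and negotiation between agents, which is precisely where the hard research questions begin.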

Success in this research area can lead to changes in the way we manage disasters in the real world. As I do my research, here are some of the questions I face: How can agents learn from each other? Can agents design their own communication protocol? How can agents learn to cooperate by drawing on their differences?

As I work towards finding some answers to the questions, I hope to help make life for us all change for the better.

Terence Tan is a senior lecturer in the Department of Electrical and Computer Engineering of Curtin Sarawak’s School of Engineering and Science. He won the 2008 Excellence and Innovation in Teaching Award from Curtin University, Perth, Western Australia, and due to his experience and expertise, is often invited to speak to students on learning, leadership and technology. His current PhD research is on ‘Learning and Cooperating Multi Agent Systems’, which is essentially AI. In addition, he is a facilitator for the John Curtin Leadership.