Reducing Lag Time in Online Games
Predictions from a neural network could reduce characters’ jerky movements.
Gamers know the problem well: in the middle of an awesome, fast-paced battle, the action onscreen becomes slow and jerky. Suddenly, your character turns up dead, and you didn't see who did it. In massively multiplayer online games, the problem of lag arises when a player's computer can't keep up with changes in a shared online world, and it can turn euphoria into frustration. New software being designed at the National University of Ireland, Maynooth, could help reduce the problem and may also have applications in military simulations.
“Ideally, somebody wants to drop down and play a game online with a bunch of other people and have the same experience they would have if everybody was in their home living room playing that game,” says Michael Katchabaw, an assistant professor of computer science at the University of Western Ontario. The problem, he explains, is that the players’ computers have to update each other on the players’ actions, and too many simultaneous updates can cause delays or overload the network. One way of reducing these problems, Katchabaw says, is a technique called dead reckoning.
Dead reckoning calls for each player’s computer to run a low-fidelity simulation of what’s going on in the game. At the same time, the computer runs a high-fidelity version that keeps precise track of the player’s actions and position. The computer constantly compares the two versions. If they don’t match, the computer sends an update to all the other participating computers, which can make the necessary corrections. While the computers must still broadcast updates, they don’t do it nearly as often as they otherwise would.
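The comparison the article describes can be sketched in a few lines of code. This is an illustrative toy, not the researchers' implementation: the one-dimensional state, function names, and the error threshold are all assumptions made for clarity.

```python
# Minimal dead-reckoning sketch. Names and the threshold value are
# illustrative assumptions, not taken from the Maynooth project.
from dataclasses import dataclass

@dataclass
class State:
    x: float  # position
    v: float  # velocity, in units per second

def extrapolate(last_update: State, dt: float) -> float:
    """Low-fidelity model: assume constant velocity since the last update."""
    return last_update.x + last_update.v * dt

def needs_update(true_x: float, predicted_x: float, threshold: float = 1.0) -> bool:
    """Broadcast a correction only when the two models diverge too far."""
    return abs(true_x - predicted_x) > threshold

# The high-fidelity model tracks the real avatar; peers run only the
# extrapolation, so no network traffic is needed while the two agree.
last = State(x=0.0, v=5.0)
true_x = 7.0                        # actual position after 1 s (the player sped up)
predicted = extrapolate(last, 1.0)  # peers believe the avatar is at 5.0
print(needs_update(true_x, predicted))  # divergence of 2.0 exceeds 1.0: send update
```

The threshold is the tuning knob: a larger value means fewer updates but bigger visible corrections when they arrive.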
“Most well-known simulations and games actually use [dead reckoning] in one form or another,” including, for example, the popular computer game Quake, says Aaron McCoy, a postdoctoral researcher at the National University of Ireland and technical lead on the neural-network project. His group’s work is a way of improving on current dead-reckoning techniques.
McCoy and his colleagues’ neural-network system is at its best when it’s predicting erratic movements. Dead-reckoning systems assume that a game character will maintain the velocity and direction that it has at the moment an update is sent. That works fine for virtual bullets, McCoy says, but human-controlled avatars often exhibit fast, jerky movements.
McCoy's system improves the process by installing a neural network in the player's computer, adding another layer of prediction and enabling smarter updates. "What we're trying to do with the neural networks is, we're trying to say, 'Look. We think that in half a second's time we're going to be here.' So we'll take that information into account and let the other computers know about it." McCoy says that his system could cut the 10 to 20 updates per second that many games send by 10 to 20 percent, although he notes that the reduction fluctuates depending on the situation.
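The idea of sending a short-horizon prediction rather than only the current state can be sketched as follows. Everything here is hypothetical: a simple moving-average predictor stands in for the trained neural network, and the class and parameter names are invented for illustration.

```python
# Sketch of prediction-assisted updates. A moving-average predictor is a
# stand-in for the neural network described in the article; all names here
# are hypothetical.
from collections import deque

class MotionPredictor:
    """Predict where the avatar will be a short time ahead, from recent velocities."""

    def __init__(self, history: int = 5):
        self.velocities = deque(maxlen=history)

    def observe(self, v: float) -> None:
        self.velocities.append(v)

    def predict(self, x: float, horizon: float = 0.5) -> float:
        # Stand-in for the neural network: extrapolate half a second ahead
        # using the mean of the recent velocities.
        if not self.velocities:
            return x
        v_avg = sum(self.velocities) / len(self.velocities)
        return x + v_avg * horizon

p = MotionPredictor()
for v in (5.0, 4.0, 0.0, -2.0):  # the avatar is decelerating and reversing
    p.observe(v)
# The update sent to peers carries the predicted position half a second ahead,
# so their extrapolations stray less and fewer corrections are needed:
print(p.predict(x=10.0))  # 10.0 + mean(5, 4, 0, -2) * 0.5 = 10.875
```

A real predictor trained on a player's movement history would capture jerky, human patterns that this average cannot, which is the point of using a neural network.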
Although the system does make some additional demands on the user's computer, McCoy says that they're negligible compared with all the processing that goes into most massively multiplayer games. "In most games, even the large-scale ones, your own computer is only actually responsible for one entity: your own avatar," McCoy says. "Because you're only controlling one avatar, the neural networks only have to run for that one avatar."

Katchabaw says that the approach adopted by the Maynooth researchers could help make online games more consistent. He adds that dead reckoning was originally developed for military simulations, and, as a result, it and techniques related to it tend to work best for actions such as movement and shooting, and less well for actions such as interacting with objects or with other players.
Smooth shooter: Researchers at the National University of Ireland tested their neural network on a game, shown above, that they designed using the Torque Game Engine.
Credit: Aaron McCoy, National University of Ireland
Tomas Ward, a senior lecturer in the electrical-engineering department of the National University of Ireland who also participated in the research, says that the software the team is working on is aimed particularly at improving the consistency of a user's experience. It will also incorporate additional research the group has done on controlling the amount of traffic transmitted between participants in networked games. "Our code will look after that entity or that player over the network and make sure that everybody's view of that player or that object over the whole session doesn't stray too far from an accepted state," he says. Ward says that the team expects to launch the software in beta in summer 2008.