Back in the 1990s, the first networked games started with so-called ‘LAN parties’, where friends would gather in person with their computers or consoles to form a local area network (LAN) and play games together, and against each other.
For years, this was the only way to reduce network ‘latency’ to levels acceptable for playing against real people in real time. When games eventually did go online, players given the choice of a game server had to be mindful to pick a fast one very close to their location, or their gameplay would suffer badly.
Of course, exact latency requirements depend on the type of game being played. Generally, latency matters more to the user experience in first-person avatar games (e.g. FPS and racing games) than in third-person avatar (e.g. role-playing, sports) and omnipresent (e.g. real-time strategy) games.
While there are other measures of performance that may affect online gameplay, such as packet loss and available bandwidth, player performance has typically been dominated by network latency (also called ‘lag’ by gamers).
When playing for money, although latency changes nothing about the rules or operation of the game, it was quickly noted that in some cases a delay could affect the game in real, noticeable ways, potentially skewing the odds in favour of those with lower latency. A player who is further away, for instance, may be at a disadvantage because their higher latency acts as a handicap.
So, how can we get around this?
For ‘action’ games, programmers commonly use certain client-side methods within the software to help trick gamers into thinking latency is lower than it is. One of these tactics is called ‘dead reckoning’. Put simply, dead reckoning means that moving objects in the client’s field of view are tracked, and their new positions are predicted from the speed, acceleration and location data in the last packet received from the server.
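As an illustration, the extrapolation step of dead reckoning can be sketched in a few lines of Python. The names and the constant-acceleration model are illustrative assumptions, not taken from any particular game engine:

```python
from dataclasses import dataclass

@dataclass
class EntityState:
    """Last state received from the server for one moving object."""
    x: float; y: float      # position
    vx: float; vy: float    # velocity (units/second)
    ax: float; ay: float    # acceleration (units/second^2)
    timestamp: float        # server time the state was sampled

def dead_reckon(state: EntityState, now: float) -> tuple[float, float]:
    """Guess the object's current position by extrapolating its last
    known position using velocity and acceleration."""
    dt = now - state.timestamp
    x = state.x + state.vx * dt + 0.5 * state.ax * dt * dt
    y = state.y + state.vy * dt + 0.5 * state.ay * dt * dt
    return (x, y)
```

The client renders the guessed position every frame, then smoothly corrects it when the next authoritative packet arrives.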
Another practice is client and server time-stamping, in which the server keeps a simulation of past states of the game world so that each client’s latency can be factored into gameplay. For instance, if a player sees a target in their crosshairs and shoots, the shot registers as a hit even if, by the time it reaches the server, the target has actually moved.
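A minimal sketch of this server-side ‘rewind’ idea, assuming a simplified one-dimensional hit test and illustrative names (not a real engine API): the server records timestamped target positions, then checks a shot against the snapshot the shooter actually saw.

```python
import bisect

class LagCompensator:
    """Keeps a short history of timestamped target positions and
    tests shots against the snapshot nearest the client's shot time."""

    def __init__(self):
        self._times = []       # snapshot timestamps, kept sorted
        self._positions = []   # target position at each timestamp

    def record(self, t: float, position: float) -> None:
        """Store the target's position at server time t."""
        self._times.append(t)
        self._positions.append(position)

    def test_hit(self, shot_time: float, aim_at: float,
                 tolerance: float = 0.5) -> bool:
        """Rewind to the snapshot closest to the client's shot time and
        test the hit there, so high-latency players aren't penalised."""
        if not self._times:
            return False
        i = bisect.bisect_left(self._times, shot_time)
        # step back if the earlier neighbour is closer in time
        if i == len(self._times) or (
            i > 0 and shot_time - self._times[i - 1] <= self._times[i] - shot_time
        ):
            i -= 1
        return abs(self._positions[i] - aim_at) <= tolerance
```

In a real engine the history would be a fixed-size ring buffer and the hit test a full 3-D intersection, but the rewind principle is the same.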
Back in the eGaming world, the big recent change in gaming culture has been the market’s shift towards handheld and mobile gaming. The need to provide comparable experiences across different devices, different operating systems and disparate networks only heightens the importance of reducing and controlling latency.
In the licensed online gaming world, the numerous licensing jurisdictions have varying stipulations as to where gaming servers must reside. In many cases, though, the stipulations focus on where the transactions, Random Number Generation (RNG) and any servers housing identifiable player data are located.
As this isn’t the whole platform, there are options, and many of the leading operators are not only tuning their software to handle latency changes but are also now operating a ‘split-stack’ environment. They satisfy their regulatory obligations in the licensed jurisdiction while placing web services, gameplay and some key content delivery as close to their players as possible – often across a network of global data centres.
What’s more, this global data centre network can be linked with private, direct connectivity – with guaranteed service levels and shielded from DDoS attacks. The net result of this approach is lower latency to the major internet hub points from diverse gaming locations and, more importantly, consistent latency and packet delivery to users.
Many operators have recognised that, in a heavily competitive environment, being closer to a player’s desktop PC or mobile handset could give their software or game the edge over a competitor’s. It also helps avoid localised internet issues: the closer you are, the lower the odds of an internet-related problem.
Advancements in Cloud technology are also making it much easier to move application workloads – not only between servers but across continents – in a matter of seconds. Scaling up and down has never been faster, and the opportunities Cloud services present for development and testing are only just being realised. Operators are already cutting test and development platform deployment time from months to minutes; it is a very exciting time.
What is clear is that, for many of today’s real-time mobile applications and games, the ‘traditional’ single hosting location may not be maximising the potential of the operator (and their software). Gone are the days when the alternative would have been cost-prohibitive; it is becoming the new ‘norm’, and it is guaranteed to move you closer to your players – in every sense.
Article appeared in iGaming Business, March – April 2015